Mozilla Localization (L10N): Mozilla Localization in 2025

A Year in Data

As is tradition, we’re wrapping up 2025 for Mozilla’s localization efforts and offering a sneak peek at what’s in store for 2026 (you can find last year’s blog post here).

Pontoon’s metrics in 2025 show a stable picture for both new sign-ups and monthly active users. While we always hope to see signs of strong growth, this flat trend is a positive achievement when viewed against the challenges surrounding community involvement in Open Source, even beyond Mozilla. Thank you to everyone actively participating on Pontoon, Matrix, and elsewhere for making Mozilla localization such an open and welcoming community.

  • 30 projects and 469 locales (+100 compared to 2024) set up in Pontoon
  • 5,019 new user registrations
  • 1,190 active users submitting at least one translation, averaging 233 users per month (+5% year-over-year)
  • 551,378 submitted translations (+18% YoY)
  • 472,195 approved translations (+22% YoY)
  • 13,002 new strings to translate (-38% YoY)

The number of strings added has decreased significantly overall, but not for Firefox, where the number of new strings was 60% higher than in 2024 (check out the increase of Fluent strings alone). That is not surprising, given the number of new features (selectable profiles, unified trust panel, backup) and the upcoming settings redesign.

As in 2024, the relentless growth in the number of locales is driven by Common Voice, which now has 422 locales enabled in Pontoon (+33%).

Before we move forward, thank you to all the volunteers who contributed their time, passion, and expertise to Mozilla’s localization over the last 12 months — or plan to do so in 2026. There is always space for new contributors!

Pontoon Development

A significant part of the work on Pontoon in 2025 isn’t immediately visible to users, but it lays the groundwork for improvements that will start showing up in 2026.

One of the biggest efforts was switching to a new data model to represent all strings across all supported formats. Pontoon currently needs to handle around ten different formats, as transparently as possible for localizers, and this change is a step to reduce complexity and technical debt. As a concrete outcome, we can now support proper pluralization in Android projects, and we landed the first string using this model in Firefox 146. This removes long-standing UX limitations (no more Bookmarks saved: %1$s instead of %1$s bookmarks saved) and allows languages to provide more natural-sounding translations.

In parallel, we continued investing in a unified localization library, moz-l10n, with the goal of having a centralized, well-maintained place to handle parsing and serialization across formats in both JavaScript and Python. This work is essential to keep Pontoon maintainable as we add support for new technologies and workflows.

Pontoon as a project remains very active. In 2025 alone, Pontoon saw more than 200 commits from over 20 contributors, not including work happening in external libraries such as moz-l10n.

Finally, we’ve been improving API support, another area that is largely invisible to end users. We moved away from GraphQL and migrated to Django REST, and we’re actively working toward feature parity with Transvision to better support automation and integrations.

Community

Our main achievement in 2025 was organizing a pilot in-person event in Berlin, reconnecting localizers from around Europe after a long hiatus. Fourteen volunteers from 11 locales spent a weekend together at the Mozilla Berlin office, sharing ideas, discussing challenges, and deepening relationships that had previously existed only online. For many attendees, this was the first time they met fellow contributors they had collaborated with for years, and the energy and motivation that came out of those days clearly showed the value of human connection in sustaining our global community.

[Image: Group dinner for the localization event in Berlin]

This doesn’t mean we stopped exploring other ways to connect. For example, throughout the year we continued publishing Contributor Spotlights, showcasing the amazing work of individual volunteers from different parts of the world. These stories highlight not just what our contributors do, but who they are and why they make Mozilla’s localization work possible.

Internally, these spotlights have played an important role in advocating on behalf of the community. By bringing real voices and contributions to the forefront, we’ve helped reinforce the message that investing in people — not just tools — is essential to the long-term health of Mozilla’s localization ecosystem.

What’s coming in 2026

“As we move into the new year, our focus will shift to exploring alternative deployment solutions. Our goal is to make Pontoon faster, more reliable, and better equipped to meet the needs of our users.”

This excerpt comes from last year’s blog post, and while it took longer than expected, the good news is that we’re finally there. On January 6, we moved Pontoon to a new hosting platform. We expect this change to bring better reliability and performance, especially in response to peaks in bot traffic that have previously made Pontoon slow or unresponsive.

In parallel, we “silently” launched the Mozilla Language Portal, a unified hub that reflects Mozilla’s unique approach to localization while serving as a central resource for the global translator community. While we still plan to expand its content, the main infrastructure is now in place and publicly available, bringing together searchable translation memories, documentation, blog posts, and other resources to support knowledge-sharing and collaboration.

On the technology side, we plan to extend plural support to iOS projects and continue improving Pontoon’s translation memory support. These improvements aim to make it easier to reuse translations across projects and formats, for example by matching strings independently of placeholder syntax differences, and to translate Fluent strings with multiple values.
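
To illustrate the placeholder-matching idea: one possible approach is to normalize every placeholder to a neutral token before the translation memory lookup. The sketch below is purely illustrative (it is not Pontoon’s actual implementation, and the placeholder patterns it covers are assumptions):

import re

# Hypothetical placeholder patterns: printf-style ("%1$s", "%s") and
# Fluent-style ("{ $count }").
PLACEHOLDER = re.compile(r"%\d+\$[sd]|%[sd]|\{\s*\$?[\w.-]+\s*\}")

def normalize(s: str) -> str:
    # Replace every recognized placeholder with a neutral token.
    return PLACEHOLDER.sub("{ph}", s)

# Both spellings now produce the same translation-memory lookup key:
assert normalize("%1$s bookmarks saved") == normalize("{ $count } bookmarks saved")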

We also aim to explore improvements in our machine translation options, evaluating how large language models could help with quality assessment or serve as alternative providers for MT suggestions.

Last but not least, we plan to keep investing in our community. While we don’t know yet what that will look like in practice, keep an eye on this blog for updates.

If you have any thoughts or ideas about this plan, let us know on Mastodon or Matrix!

Thank you!

As we look toward 2026, we’re grateful for the people who make Mozilla’s localization possible. Through shared effort and collaboration, we’ll continue breaking down barriers and building a web that works for everyone. Thank you for being part of this journey.

Ludovic Hirlimann: Are Mozilla's forks any good?

To answer that question, we first need to understand how complex writing or maintaining a web browser is.

A "modern" web browser is :

  • a network stack,
  • and html+[1] parser, 
  • and image+[2] decoder,
  • a javascript[3] interpreter compiler,
  • a User's interface,
  • integration with the underlying OS[4],
  • And all the other things I'm currently forgetting. 

Of course, all the above points interact with one another in different ways. In order for "the web" to work, standards are developed and then implemented in the different browsers' rendering engines.

In order to "make" the browser, you need engineers to write and maintain the code, which is probably around 30 Million lines of code[5] for Firefox. Once the code is written, it needs to be compiled [6] and tested [6]. This requires machines that run the operating system the browser ships to (As of this day, mozilla officially ships on Linux, Microslop Windows and MacOS X - community builds for *BSD do exists and are maintained). You need engineers to maintain the compile (build) infrastructure. 

Once the engineers responsible for the releases[7] have decided which code and features are mature enough, they assemble the bits of code and, like the other engineers, build, test and ship the results to the people using said web browser.

When I was employed at Mozilla (the company that makes Firefox), around 900 engineers were tasked with the above, and a few more were working on research and development. These engineers work 5 days a week, 8 hours per day: 900 × 8 × 5 × 52 = 1,872,000 hours of engineering brain power spent every year on making Firefox versions (it's actually less, because I have not taken vacations into account). On top of that, you need to add the cost of building and running the tests before a new version reaches the end user.

The current browsing landscape looks dark. There are currently three choices of rendering engine: WebKit-based browsers, Blink-based ones (Blink is a fork of WebKit, itself a descendant of KHTML) and Gecko-based ones. 90+% of the market is dominated by WebKit/Blink-based browsers. This leads to less standards work: if the major engine implements a feature, others need to play catch-up to stay relevant. This happened in the 2000s when IE dominated the browser landscape[8], making it difficult to use Mac OS 9 or X (I'm not even mentioning Linux here :)). It also leads to most web developers using Chrome and only once in a while testing with Firefox or even Safari; and if there's a little glitch, they can still ship because of market share.

The Mozilla codebase that Firefox grew out of was started back in 1998, when embedding software was not really a thing across all the platforms that had to be supported. Firefox is very hard to embed (e.g. to use as a software library and add stuff on top). I know that for a fact because both Camino and Thunderbird embed Gecko.

In the last few years, Mozilla has been irritating the people I connect with, who are very privacy-focused and do not look kindly on what Mozilla does with Firefox. I believe that Mozilla does this in order to stay relevant to normal users. It needs to stay relevant for at least two reasons:

  1. To keep web standards open, so anyone can implement a web browser / web service.
  2. To have enough traffic to be able to pay all the engineers working on Gecko.

Now that I've explained a few important things, let's answer the question: "Are Mozilla's forks any good?"

I am biased, as I've worked for the company before. But how can a few people, even if they are good and have plenty of free time, cope with what maintaining a fork requires:

  • following security patches and porting said patches,
  • following development and maintaining their branch while changes land all over the place,
  • and testing: how do they test?

If you are comfortable with that risk, then using a fork because Mozilla is pushing stuff you don't want is probably doable. If not, you can always kill the features you don't like with some `about:config` magic.

 

Now, I've set a tone above that foresees a dark future for open web technologies. What can you do to keep the web open and with some privacy focus?

  1. Keep using Mozilla Nightly.
  2. Give Servo a try.

 

 

[1] HTML is interpreted, which is why it needs to be parsed and then rendered.

[2] In order to draw an image or a photo on a screen, you need to be able to decode it. Many file formats are available.

[3] JavaScript is a programming language that makes HTML pages interactive for the person using the web browser. See https://developer.mozilla.org/en-US/docs/Glossary/JavaScript

[4] Operating systems need, at the very least, to know which program to open files with. The OS landscape has changed a lot over the last 25 years. These days you need to support 3 major OSes, while in the 2000s there were more systems, IRIX for example. Some portions of the Mozilla code base still support these long-dead systems.

[5] https://math.answers.com/math-and-arithmetic/How_many_lines_of_code_in_mozillafirefox

[6] Testing implies testing the code, and also having engineers or users use the unfinished product to check that it doesn't regress. Testing Mozilla is explained at https://ehsanakhgari.org/wp-content/uploads/talks/test-mozilla/

[7] Read: a release equals a version. Version 1.5 is a release, as is version 3.0.1.

[8] https://en.wikipedia.org/wiki/Browser_wars 

Wladimir Palant: Backdoors in VStarcam cameras

VStarcam is an important brand of cameras based on the PPPP protocol. Unlike the LookCam cameras I looked into earlier, these are often being positioned as security cameras. And they in fact do a few things better like… well, like having a mostly working authentication mechanism. In order to access the camera one has to know its administrator password.

So much for the theory. When I looked into the firmware of the cameras I discovered a surprising development: over the past years this protection has been systematically undermined. Various mechanisms have been added that leak the access password, and in several cases these cannot be explained as accidents. The overall tendency is clear: for some reason VStarcam really wants to have access to their customers’ passwords.

A reminder: “P2P” functionality based on the PPPP protocol means that these cameras will always communicate with and be accessible from the internet, even when located on a home network behind NAT. Short of installing a custom firmware, this can only be addressed by configuring the network firewall to deny the camera internet access.

How to recognize affected cameras

Not every VStarcam camera has “VStarcam” printed on the side. I have seen reports of VStarcam cameras being sold under the brand names Besder, MVPower, AOMG, OUSKI, and there are probably more.

Most cameras should be recognizable by the app used to manage them. Any camera managed by one of these apps should be a VStarcam camera: Eye4, EyeCloud, FEC Smart Home, HOTKam, O-KAM Pro, PnPCam, VeePai, VeeRecon, Veesky, VKAM, VsCam, VStarcam Ultra.

Downloading the firmware

VStarcam cameras have a mechanism to deliver firmware updates (LookCam cameras prove that this shouldn’t be taken for granted). The app managing the camera will request update information from an address like http://api4.eye4.cn:808/firmware/1.2.3.4/EN where 1.2.3.4 is the firmware version. If a firmware update is available the response will contain a download server and a download path. The app sends these to the device which then downloads and installs the updated firmware.

Both requests are performed over plain HTTP, and this is already the first issue. If an attacker can produce a manipulated response on the network that either the app or the device is connected to, they will be able to install a malicious update on the camera. The former is particularly problematic, as the camera owner may connect to an open WiFi or similarly untrusted network while out.

The last part of a firmware version is a build number, which is ignored for the update requests. The first part is a vendor ID, where only a few options seem relevant (I checked 10, 48 and 66). The rest of the version number can be easily enumerated. Many firmware branches don’t have an active update, and when they do, some updates won’t download because the servers in question appear to be no longer operational. Still, I found 380 updates this way.
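
A minimal sketch of what such an enumeration could look like (the URL format and vendor IDs are taken from above; the ranges of the middle version parts, the timeout and the response handling are assumptions):

import itertools
import urllib.request

VENDOR_IDS = (10, 48, 66)

def update_info(vendor: int, model: int, branch: int) -> bytes | None:
    # The build number (last part) is ignored by the server, so 0 works.
    url = f"http://api4.eye4.cn:808/firmware/{vendor}.{model}.{branch}.0/EN"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            body = resp.read()
    except OSError:
        return None
    return body or None  # an empty body means no active update

for vendor, model, branch in itertools.product(VENDOR_IDS, range(256), range(256)):
    if (info := update_info(vendor, model, branch)):
        print(f"{vendor}.{model}.{branch}: {info[:80]!r}")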

I managed to unpack all but one of these updates. Firmware version 10.1.110.2 wasn’t for a camera but rather some device with an HDMI connector and without any P2P functionality – probably a Network Video Recorder (NVR). Firmware version 10.121.160.42 wasn’t using PPPP but something called NHEP2P and an entirely different application-level protocol. Ten updates weren’t updating the camera application but only the base system. This left 367 firmware versions for this investigation.

Caveats of this survey

I do not own any VStarcam hardware, nor would it be feasible to investigate hundreds of different firmware versions with real hardware. The results of this article are based solely on reverse engineering, emulation, and automated analysis via running Ghidra in headless mode. While I can easily emulate a PPPP server, doing the same for the VStarcam cloud infrastructure isn’t possible; I simply don’t know how it behaves. Similarly, the firmware’s interaction with hardware had to be left out of the emulation. While I’m still quite confident in my results, these limitations could introduce errors.

More importantly, there are only so many firmware versions that I checked manually. Most of them were checked automatically, and I typically only looked at a few lines of decompiled code that my scripts extracted. There is potential for false negatives here; I expect that there are more issues with VStarcam firmware than what’s listed in this post.

VStarcam’s authentication approach

When an app communicates with a camera, it sends commands like GET /check_user.cgi?loginuse=admin&loginpas=888888&user=admin&pwd=888888. Despite the looks of it, these aren’t HTTP requests passed on to a web server. Instead, the firmware handles them in the function P2pCgiParamFunction, which doesn’t even attempt to parse the request. The processing code looks for substrings like check_user.cgi to identify the command (yes, you’d better not set check_user.cgi as your access password). Parameter extraction works via similar substring matching.
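
A small sketch of why this kind of substring dispatch is fragile (all names besides the endpoint and parameter names are made up; only the behavior is taken from the description above):

def extract_param(request: str, name: str) -> str:
    # Naive substring search instead of real URL parsing.
    start = request.find(name + "=")
    if start < 0:
        return ""
    start += len(name) + 1
    end = request.find("&", start)
    return request[start:] if end < 0 else request[start:end]

def dispatch(request: str) -> str:
    # The command is identified by substring, not by parsing the path.
    if "check_user.cgi" in request:
        return "check_user"
    if "get_online_log.cgi" in request:
        return "get_online_log"
    return "unknown"

print(extract_param("GET /check_user.cgi?loginuse=admin&loginpas=888888", "loginpas"))  # -> "888888"
# A password containing "check_user.cgi" derails command detection:
print(dispatch("GET /clear_log.cgi?loginpas=check_user.cgi"))  # -> "check_user"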

It’s worth noting that these cameras have a very peculiar authentication system which VStarcam calls “dual authentication.” Here is how the Eye4 application describes it:

The dual authentication mechanism is a measure to upgrade the whole system security

  1. The device will double check the identity of the visitor and does not support the old version of app.
  2. Considering the security risk of possible leakage, the plaintext password mode of the device was turned off and ciphertext access was used.
  3. After the device is added for the first time, it will not be allowed to be added for a second time, and it will be shared by the person who has added it.

I’m not saying that this description is utter bullshit, but there is a considerable mismatch with the reality that I can observe. The VStarcam firmware cannot accept anything other than plaintext passwords. Newer firmware versions employ obfuscation at the PPPP level, but this hardly deserves the name “ciphertext”.

What I can see is: once a device is enrolled into dual authentication, authentication is handled by the function GetUserPri_doubleVerify rather than GetUserPri. There isn’t a big difference between the two: both will try the credentials from the loginuse/loginpas parameters and fall back to the user/pwd credential pair. GetUserPri_doubleVerify merely checks against a different password.

From the applications I get the impression that the dual authentication password is automatically generated and probably not even shared with the user but stored in their cloud account. This is an improvement over the regular password, which defaults to 888888 and allowed these cameras to be enrolled into a botnet. But it’s still a plaintext password used for authentication.

There is a second aspect to dual authentication. When dual authentication is used, the app is supposed to make a second authentication call to eye4_authentication.cgi. The loginAccount and loginToken parameters here appear to belong to the user’s cloud account, apparently meant to make sure that only the right user can access a device.

Yet in many firmware versions I’ve seen the eye4_authentication.cgi request always succeeds. The function meant to perform a web request is simply hardcoded to return the success code 200. Other firmware versions actually make a request to https://verification.eye4.cn, yet this server also seems to produce a 200 response regardless of what parameters I try. It seems that VStarcam never made this feature work the way they intended it.

None of this stopped VStarcam from boasting on their website merely a year ago:

[Image: A promotion image with the following text: O-KAM Pro. Dual authentication mechanism. AES financial grade encryption + dual authentication. We highly protect your data and privacy. Server distribution: low-power devices, 4 master servers, namely Hangzhou, Hong Kong, Frankfurt, Silicon Valley, etc.]

You can certainly count on anything saying “financial grade encryption” being bullshit. I have no idea where AES comes into the picture here; I haven’t seen it being used anywhere. Maybe it’s their way of saying “we use TLS when connecting to our cloud infrastructure.”

Endpoint protection

A reasonable approach to authentication is: authentication is required before any requests unrelated to authentication can be made. This is not the approach taken by VStarcam firmware. Instead, some firmware versions decide for each endpoint individually whether authentication is necessary. Other versions put a bunch of endpoints outside of the code enforcing authentication.

The calls explicitly excluded from authentication differ by firmware version but include, for example: get_online_log.cgi, show_prodhwfg.cgi, ircut_test.cgi, clear_log.cgi, alexa_ctrl.cgi, server_auth.cgi. For most of these it isn’t obvious why they should be accessible to unauthenticated users. But get_online_log.cgi caught my attention in particular.

Unauthenticated log access

So a request like GET /get_online_log.cgi?enable=1 can be sent to a camera without any authentication. This isn’t a request that any of the VStarcam apps seem to support, so what does it do?

Despite the name, this isn’t a download request; it rather sets a flag for the current connection. The logic behind this involves many moving parts, including a Linux kernel module, but the essence is this: whenever the application logs something via the LogSystem_WriteLog function, it won’t merely print that to stderr and write it to the log file on the SD card, but will also send it to any connection that has this flag set.

What does the application log? Lots and lots of stuff. On average, VStarcam firmware has around 1500 such logging calls. For example, it could log security tokens:

LogSystem_WriteLog("qiniu.c", "upload_qiniu", 497, 0,
                   "upload_qiniu*** filename = %s, fileid = %s, uptoken = %s\n", );
LogSystem_WriteLog("pushservice.c", "parsePushServerRequest_cjson", 5281, 1,
                   "address=%s token =%s master= %d timestamp = %d", );
LogSystem_WriteLog("queue.c", "CloudUp_Manage_Pth", 347, 2,
                   "token=%s", );

It could log cloud server responses:

LogSystem_WriteLog("pushservice.c", "curlPostMqttAuthCb", 4407, 3,
                   "\n\nrspBuf = %s\n", );
LogSystem_WriteLog("post/postFileToCloud.c", "curl_post_file_cb", 74, 0,
                   "\n\nrspBuf = %s\n", );
LogSystem_WriteLog("pushserver.c", "curl_Eye4Authentication_write_data_cb", 2822, 0,
                   "rspBuf = %s", );

And of course it will log the requests coming in via PPPP:

LogSystem_WriteLog("vstcp2pcmd.c", "P2pCgiParamFunction", 633, 0,
                   "sit %d, pcmd: %s", );

Reminder: these requests contain the authentication password as a parameter. So an attacker can connect to a vulnerable device, request logs and wait for the legitimate device owner to connect. Once they do, their password will show up in the logs – voilà, the attacker has access now.

VStarcam appears to be at least somewhat aware of this issue because some firmware versions contain code “censoring” password parameters prior to logging:

memcpy(pcmd, request, sizeof(pcmd));
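// cut the logged command off at the first credential parameter (dropping everything after it)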
char* pos = strstr(pcmd, "loginuse");
if (pos)
  *pos = 0;
LogSystem_WriteLog("vstcp2pcmd.c", "P2pCgiParamFunction", 633, 0,
                   "sit %d, pcmd: %s", sit, pcmd);

But that’s only the beginning of the story of course.

Explicit password leaking via logs

In addition to the logging calls where the password leaks as a (possibly unintended) side-effect, some logging calls are specifically designed to write the device password to the log. For example, the function GetUserPri, meant to handle authentication when dual authentication isn’t enabled, will often do something like this on a failed login attempt:

LogSystem_WriteLog("sysparamapp.c", "GetUserPri", 177, 0,
                   "loginuse=%s&loginpas=%s&user=admin&pwd=888888&", gUser, gPassword);

These aren’t the parameters of a received login attempt but rather what the parameters should look like for the request to succeed. And if the attacker has enabled log access for their connection, they will get the device credentials handed to them on a silver platter – without even having to wait for the device owner to connect.

If dual authentication is enabled, function GetUserPri_doubleVerify often contains a similar call:

LogSystem_WriteLog("web.c", "GetUserPri_doubleVerify", 536, 0,
                   "pri[%d] system OwnerPwd[%s] app Pwd[%s]",
                   pri, gOwnerPassword, gAppPassword);

Log uploading

What got me confused at first were the firmware versions that would log the “correct” password on failed authentication attempts but lacked the capability for unauthenticated log access. When I looked closer I found the function DoSendLogToNodeServer. The firmware receives a “node configuration” from a server which includes a “push IP” and the corresponding port number. It then opens a persistent TCP connection to that address (unencrypted of course), so that DoSendLogToNodeServer can send messages to it.

Despite the name, this function doesn’t upload all of the application logs. There are only three to four DoSendLogToNodeServer calls in the firmware versions I looked at, and two are invariably found in the function P2pCgiParamFunction, in code that runs on the first failed authentication attempt:

sprintf(buffer,"password error [doublePwd][%s], [PassWd][%s]", gOwnerPassword, gPassword);
DoSendLogToNodeServer(request);
DoSendLogToNodeServer(buffer);

This sends both the failed authentication request and the correct passwords to a VStarcam server. So while the password isn’t being leaked here to everybody who knows how to ask, it’s still being leaked to VStarcam themselves. And to anybody who is eavesdropping on the device’s traffic, of course.

A few firmware versions have log upload functionality in a function called startUploadLogToServer; here really all logging output is uploaded to the server. This one isn’t called unconditionally, however, but rather enabled by the setLogUploadEnable.cgi endpoint. An endpoint which, you guessed it, can be accessed without authentication. But at least these firmware versions don’t seem to have any explicit password logging, only the “regular” logging of requests.

Password-leaking backdoor

With some considerable effort, all of the above could be explained as debugging functionality that was mistakenly shipped to production. VStarcam wouldn’t be the first company to fail to realize that functionality labeled “for debugging purposes only” will still be abused if released with the production build of their software. But I found yet another password leak, one which can only be described as a backdoor.

At some point VStarcam introduced a second version of their get_online_log.cgi API. When that second version is requested the device will respond with something like:

result=0;
index=12345678;
str=abababababab;

The result=0 part is typical and indicates that authentication (or the lack thereof, in this case) was successful. The other two values are unusual, and eventually I decided to check what they were about. It turned out that str is a hex-encoded version of the device password after it was XOR’ed with a random byte, and index is an obfuscated representation of that byte.
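
A sketch of what recovering the password could look like: the exact encoding of index isn’t described here, but with a single XOR byte it can simply be brute-forced (a hypothetical helper, not VStarcam code):

def candidate_passwords(str_value: str) -> list[str]:
    data = bytes.fromhex(str_value)
    candidates = []
    for key in range(256):
        decoded = bytes(b ^ key for b in data)
        if all(32 <= b < 127 for b in decoded):  # keep printable ASCII only
            candidates.append(decoded.decode("ascii"))
    return candidates

# Several candidates may survive the printability filter; decoding the
# index value would narrow it down to exactly one.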

I can only explain it like this: somebody at VStarcam thought that leaking passwords via log output was too obvious, people might notice. So they decided to expose the device password in a more subtle way, one that only they knew how to decode (unless somebody notices this functionality and spends two minutes studying it in the firmware).

Mind you, even though this is clearly a backdoor, I’m still not ruling out incompetence. Maybe VStarcam made a large enough mess of their dual authentication that their customer support needs to recover device access on a regular basis. However, they do have device reset functionality that should normally be used in this scenario.

In the end, for their customers it doesn’t matter what the intention was. The result is a device that cannot be trusted with protecting access. For a security camera this is an unforgivable flaw.

Establishing a timeline

Now we are coming to the tough questions. Why do some firmware versions have this backdoor functionality while others don’t? When was this introduced? In what order? What is the current state of affairs?

You might think that after compiling the data on 367 firmware versions the answers would be obvious. But the data is so inconsistent that drawing any conclusions is really difficult. The thing is, we aren’t dealing with a single evolving codebase here. We aren’t even dealing with two codebases, or a dozen of them. 367 firmware versions are 367 different codebases. These codebases are related, they share some code here and there, but they are all being developed independently.

I’ve seen this development model before. What VStarcam appears to be doing is: for every new camera model they take some existing firmware and fork it. They adjust that firmware for the new hardware, they probably add new features as well. None of this work makes it into the original firmware unless it is explicitly backported. And since VStarcam is maintaining hundreds of firmware variants, the older ones are usually only receiving maintenance changes if any at all.

To make this mess complete, VStarcam’s firmware version numbers don’t make any sense at all. And I don’t just mean the fact that VStarcam releases the same camera under 30 different model names, so there is no chance of figuring out the model-to-firmware-version mapping. It’s also the firmware version numbers themselves.

As I’ve already mentioned, the last part of the firmware version is the build number, increased with each release. The first part is the vendor ID: firmware versions starting with 48 are VStarcam’s global releases, whereas 66 is reserved for their Russian distributor (or rather was, I think). Current VStarcam firmware is usually released with vendor ID 10, however, standing for… who knows, VeePai maybe? This leaves the two version parts in between, and I couldn’t find any logic here whatsoever. Firmware versions sharing the third part of the version number would sometimes be closely related, but only sometimes. At the same time, the second part of the version number is supposed to represent the camera model, but that’s clearly not always correct either.

I ended up extracting all the logging calls from all the firmware versions and using that data to calculate a distance between every pair of firmware versions (a sketch of the distance calculation follows below). I then fed this data into GraphViz and asked it to arrange the graph for me. It gave me the VStarcam spiral galaxy:

[Image: A graph with a number of green, yellow, orange, red and pink ovals, each containing a version number. The ovals aren’t distributed evenly but rather clustered. The color distribution also varies by cluster. The next image has more detailed descriptions of the clusters.]

Click the image above to see the larger and slightly interactive version (it shows additional information when the mouse pointer is over a graph node). The green nodes are the ones that don’t allow access to device logs. Yellow are the ones providing unauthenticated log access, always logging incoming requests including their password parameters. The orange ones have additional logging that exposes the correct password on failed authentication attempts – or they call the DoSendLogToNodeServer function to send the correct password to a VStarcam server. The red ones have the backdoor in the get_online_log.cgi API leaking passwords. Finally, pink are the ones which pretend to improve things by censoring parameters of logged requests – yet all of these, without exception, leak the password via the backdoor in the get_online_log.cgi API.

Note: Firmware version 10.165.19.37 isn’t present in the graph because it is somehow based on an entirely different codebase with no relation to the others. It would be red in the graph however, as the backdoor has been implemented here as well.
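
For the curious, here is roughly what such a distance calculation could look like. The article doesn’t specify the metric, so Jaccard distance over the sets of extracted logging calls is an assumption, as are all the names below:

from itertools import combinations

def jaccard_distance(a: set[str], b: set[str]) -> float:
    union = a | b
    if not union:
        return 0.0
    return 1.0 - len(a & b) / len(union)

def to_dot(logcalls: dict[str, set[str]], threshold: float = 0.5) -> str:
    # logcalls maps firmware version -> set of logging call signatures.
    lines = ["graph firmware {"]
    for (va, sa), (vb, sb) in combinations(logcalls.items(), 2):
        d = jaccard_distance(sa, sb)
        if d < threshold:  # only connect sufficiently similar versions
            lines.append(f'  "{va}" -- "{vb}" [len={d:.2f}];')
    lines.append("}")
    return "\n".join(lines)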

Not only does this graph show the firmware versions as clusters, it’s also possible to approximately identify the direction of time for each cluster. Let’s add cluster names and time arrows to the image:

[Image: Clusters in the graph above marked with red letters A to F and blue arrows. A dense cluster of green nodes in the middle of the graph is marked as A. Left of it is cluster B, with green nodes at its right edge that increasingly turn yellow towards the left edge. The blue arrow points from cluster A to the left edge of cluster B. A small cluster below clusters A and B is labeled D; here green nodes at the top turn yellow and orange towards the bottom. Cluster E below cluster D has orange nodes at the top which increasingly turn pink towards the bottom, with some green nodes in between. A blue arrow points from cluster D to the bottom of cluster E. A lengthy cluster at the top of the graph is labeled C; a blue arrow points from its left to its right edge. This cluster starts out green and mostly transitions towards orange along the time arrow. Finally, the right part of the graph is occupied by a large cluster labeled F. The blue arrow starts at the orange nodes in the middle of this cluster and points in two directions: towards the mostly orange nodes at the bottom, and towards the top, where the orange nodes are first mostly replaced by the pink ones and then by red.]

Of course this isn’t a perfect representation of the original data, and I wasn’t sure whether it could be trusted. Are these clusters real or merely an artifact produced by the graph algorithm? I verified things manually and could confirm that the clusters are in fact distinctly different on the technical level, particularly when considering update formats:

  • Clusters A and B represent firmware for ARM processors. I’m unsure what caused the gap between the two clusters but cluster A contains firmware from years 2019 and 2020, cluster B on the other hand is mostly years 2021 and 2022. Development pretty much stopped here, the only exception being the four red firmware versions which are recent. Updates use the “classic” ZIP format here.
  • Cluster C covers years 2019 to 2022. Quite remarkably, in these years the firmware from this cluster moved from ARM processors and LiteOS to MIPS processors and Linux. The original updates based on VStarcam Pack System were replaced by the VeePai-branded ZIP format and later by Ingenic updates with LZO compression. All that happened without introducing significant changes to the code but rather via incremental development.
  • Cluster D contains firmware for the MIPS processors from years 2022 and 2023. Updates are using the VeePai-branded ZIP format.
  • Cluster E formed around 2023, there is still some development being done here. It uses MIPS processors like cluster D, yet the update format is different (what I called VeePai updates in my previous blog post).
  • Cluster F has seen continuous development since approximately 2022, this is firmware based on Ingenic’s MIPS hardware and the most active branch of VStarcam development. Originally the VeePai-branded ZIP format was used for updates, this was later transitioned to Ingenic updates with LZO compression and finally to the same format with jzlcma compression.

With the firmware versions ordered like this I could finally make some conclusions about the introduction of the problematic features:

  • Unauthenticated log access via the get_online_log.cgi API was introduced in cluster B around 2022.
  • Logging the correct password on failed attempts was introduced independently in cluster C. In fact, some firmware versions had this in 2020 already.
  • In 2021 cluster C also added the innovation that was the DoSendLogToNodeServer function, sending the correct password to a VStarcam server on the first failed login attempt.
  • Unauthenticated log access and logging the correct password appear to have been combined in cluster D in 2023.
  • Cluster E initially also adopted the approach of exposing log access and logging the device password on failed attempts, adding the sending of the correct password to a VStarcam server to the mix. However, starting in 2024, firmware versions with the get_online_log.cgi backdoor started popping up here, and these have all other password leaks removed. They even censor passwords in logged request parameters. Either there were security considerations at play, or the other ways to expose the password were considered unnecessary at this point and too obvious.
  • Cluster F also introduced logging the device password on failed attempts around 2023. This cluster appears to be the origin of the get_online_log.cgi backdoor; it was introduced here around 2024. Unlike in cluster E, this backdoor didn’t replace the existing password leaks but only complemented them. In fact, while cluster F was initially “censoring” parameters so that logged requests wouldn’t leak passwords, this measure appears to have been dropped later in 2024. Current cluster F firmware tends to have all the issues described in this post simultaneously. Whatever security considerations may have driven the changes in cluster E, the people in charge of cluster F clearly disagreed.

The impact

So, how bad is it? Knowing the access password allows access to the camera’s main functionality: audio and video recordings. But these cameras have been known for vulnerabilities allowing execution of arbitrary commands. Also, newer cameras have an API that will start a telnet server with hardcoded and widely known administrator credentials (older cameras had this telnet server running by default). So we have to assume that a compromised camera could become part of a botnet or be used as a starting point for attacks against a network.

But this requires accessing the camera first, and most VStarcam cameras won’t be exposed to the internet directly. They will only be reachable via the PPPP protocol. And for that the attackers would need to know the device ID. How would they get it?

There are a number of ways, most of which I’ve already discussed before. For example, anybody who was briefly connected to your network could have collected the device IDs of your cameras. The script to do that won’t currently work with newer VStarcam cameras, because these obfuscate traffic at the PPPP level, but the necessary adjustments aren’t exactly complicated.

PPPP networks still support “supernodes,” devices that help route traffic. Back in 2019 Paul Marrapese abused that functionality to register a rogue supernode and collect device IDs en masse. There is no indication that this trick stopped working, and the VStarcam networks are likely susceptible as well.

Users also tend to leak their device IDs themselves: they will post screenshots or videos of the app’s user interface. At first glance this is less problematic with the O-KAM Pro app, because this one will display only a vendor-specific device ID (it looks similar to a PPPP device ID but has seven digits and only four letters in the verification code). That is, until you notice that the app uses a public web API to translate vendor-specific device IDs into PPPP device IDs.

Anybody who can intercept some PPPP traffic can extract the device IDs from it. This works even when VStarcam networks obfuscate the traffic rather than transmitting in plaintext: the static keys are well known, and removing the obfuscation isn’t hard.

And finally, simply guessing device IDs is still possible. With only 5 million possible verification codes for each device ID and servers not implementing rate limiting, brute-force attacks are quite realistic.

Let’s not forget the elephant in the room however: VStarcam themselves know all the device IDs of course. Not just that, they know which devices are active and where. With a password they can access the cameras of interest to them (or their government) anytime.

Coordinated disclosure attempt

Given the intentional nature of these issues, I was unsure how to deal with this. I mean, what’s the point of reporting vulnerabilities to VStarcam that they are clearly aware of? In the end I decided to give them a chance to address the issues before they become public knowledge.

However, all I found was VStarcam boasting about their ISO 27001:2022 compliance. My understanding is that this requires them to have a dedicated person responsible for vulnerability management, but they are not obliged to list any security contact that can be reached from outside the company – and so they don’t. I ended up emailing all company addresses I could find, asking whether there is any way to report security issues to them.

I haven’t received any response, an experience that, as far as I can tell, other people have already had with VStarcam. So I went with my initial publication schedule rather than waiting 90 days as I would normally do.

Recommendations

Whatever motives VStarcam had to backdoor their cameras, the consequence for the customers is: these cameras cannot be trusted. Their access protection should be considered compromised. Even with firmware versions shown as green on my map, there is no guarantee that I haven’t missed something or that these will still be green after the next update.

If you want to keep using a VStarcam camera, the only safe way to do so is to disconnect it from the internet. It doesn’t have to be disconnected physically; internet routers will often have a way to prohibit internet traffic to and from particular devices. My router, for example, has this feature under parental controls.

Of course this will mean that you will only be able to control your camera while connected to the same network. It might be possible to explicitly configure port forwarding for the camera’s RTSP port, allowing you to access at least the video stream from outside. Just make sure that your RTSP password isn’t known to VStarcam.

Jonathan Almeida: Rebase all WIPs to the new main

A small pet peeve when fetching the latest main in jujutsu: I like to move all my WIP patches onto the new one. That's also nice because jj doesn't make me fix the conflicts immediately!

The solution from a co-worker (kudos to skippyhammond!) is to query all immediate descendants of the previous main after the fetch.

jj git fetch
# assuming 'z' is the rev-id of the previous main.
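# "z+" selects the children of z; "mutable()" keeps immutable revs out of the rebase.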
jj rebase -s "mutable()&z+" -d main

I haven't learnt how to make aliases accept params yet, so this will have to do for now.

Update: After a bit of searching, it seems that today this is only possible by wrapping it in a shell script. Based on the examples in the jj documentation an alias would look like this:

[aliases]
# Update all revs to the latest main; point to the previous one.
hoist = ["util", "exec", "--", "bash", "-c", """
set -euo pipefail
jj rebase -s "mutable()&$1+" -d "main"
""", ""]

Wladimir Palant: Analysis of PPPP “encryption”

My first article on the PPPP protocol already said everything there was to say about PPPP “encryption”:

  • Keys are static and usually trivial to extract from the app.
  • No matter how long the original key, it is mapped to an effective key that’s merely four bytes long.
  • The “encryption” is extremely susceptible to known-plaintext attacks, usually allowing reconstruction of the effective key from a single encrypted packet.

So this thing is completely broken, why look any further? Because there is at least one situation where you don’t know the app being used, so you cannot extract the key, and you don’t have any traffic to analyze either: when you are trying to scan your local network for potential hidden cameras.

This script will currently only work for cameras using plaintext communication. Other cameras expect a properly encrypted “LAN search” packet and will ignore everything else. How can this be solved without listing all possible keys in the script? By sending all possible ciphertexts of course!

TL;DR: What would be completely ridiculous with any reasonable protocol turned out to be quite possible with PPPP. There are at most 157,092 ways in which a “LAN search” packet can be encrypted. I’ve opened a pull request to have the PPPP device detection script adjusted.

Note: Cryptanalysis isn’t my topic, I am by no means an expert here. These issues are simply too obvious.

Mapping keys to effective keys

The key which is specified as part of the app’s “init string” is not being used for encryption directly. Nor is it being fed into any of the established key stretching algorithms. Instead, a key represented by the byte sequence $b_1, b_2, \ldots, b_n$ is mapped to four bytes $k_1, k_2, k_3, k_4$ that become the effective key. These bytes are calculated as follows ($\lfloor x \rfloor$ means rounding down, $\otimes$ stands for the bitwise XOR operation):

$$
\begin{aligned}
k_1 &= (b_1 + b_2 + \ldots + b_n) \mod 256\\
k_2 &= (-b_1 + -b_2 + \ldots + -b_n) \mod 256\\
k_3 &= (\lfloor b_1 \div 3 \rfloor + \lfloor b_2 \div 3 \rfloor + \ldots + \lfloor b_n \div 3 \rfloor) \mod 256\\
k_4 &= b_1 \otimes b_2 \otimes \ldots \otimes b_n
\end{aligned}
$$
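
As a direct transcription of these formulas into code (a sketch for illustration, not taken from any PPPP implementation):

def effective_key(key: bytes) -> tuple[int, int, int, int]:
    k1 = sum(key) % 256
    k2 = -sum(key) % 256                # fully determined by k1
    k3 = sum(b // 3 for b in key) % 256
    k4 = 0
    for b in key:
        k4 ^= b                         # bitwise XOR of all key bytes
    return k1, k2, k3, k4

# For an ASCII key, the lowest bits of k1 and k4 coincide and k4 < 128
# (both facts are derived below):
k1, k2, k3, k4 = effective_key(b"example-key")
assert k1 & 1 == k4 & 1 and k4 < 128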

In theory, a 4-byte effective key means $256^4 = 4{,}294{,}967{,}296$ possible values. But that would only be the case if these bytes were independent of each other.

Redundancies within the effective key

Of course the bytes of the effective key are not independent. This is most obvious with $k_2$, which is completely determined by $k_1$:

$$
\begin{aligned}
k_2 &= (-b_1 + -b_2 + \ldots + -b_n) \mod 256\\
&= -(b_1 + b_2 + \ldots + b_n) \mod 256\\
&= -k_1 \mod 256
\end{aligned}
$$

This means that we can ignore $k_2$, bringing the number of possible effective keys down to $256^3 = 16{,}777{,}216$.

Now let’s have a look at the relationship between $k_1$ and $k_4$. Addition and bitwise XOR are very similar operations; the latter merely ignores carry. This difference affects all the bits of the result but the lowest one, as there is no carry coming into it. This means that the lowest bits of $k_1$ and $k_4$ are always identical. So $k_4$ has only 128 possible values for any value of $k_1$, bringing the total number of effective keys down to $256 \cdot 256 \cdot 128 = 8{,}388{,}608$.

And that’s how far we can get considering only redundancies. It can be shown that a key can be constructed resulting in any combination of $k_1$ and $k_3$ values. Similarly, it can be shown that any combination of $k_1$ and $k_4$ is possible as long as the lowest bit is identical.

ASCII to the rescue

But the keys we are dealing with here aren’t arbitrary bytes. They aren’t limited to alphanumeric characters (some keys also contain punctuation), but they are all invariably limited to the ASCII range. And that means that the highest bit is never set in any of the $b_i$ values.

Which in turn means that the highest bit is never set in $k_4$, due to the nature of the bitwise XOR operation. We can once again rule out half of the effective keys: for any given value of $k_1$ there are only 64 possible values of $k_4$. We now have $256 \cdot 256 \cdot 64 = 4{,}194{,}304$ possible effective keys.

How large is n?

Now let’s have a thorough look at how $k_3$ relates to $k_1$, ignoring the modulo operation at first. We are taking one third of each byte, rounding it down, and summing that up. What if we were to sum up first and round down at the end, how would that relate? Well, the result definitely cannot be smaller than when rounding down in each step, so we have an upper bound:

$$
\lfloor b_1 \div 3 \rfloor + \lfloor b_2 \div 3 \rfloor + \ldots + \lfloor b_n \div 3 \rfloor \leq \lfloor (b_1 + b_2 + \ldots + b_n) \div 3 \rfloor
$$

How much smaller can the left side get? Each time we round down, this removes at most two thirds, and we do this $n$ times. So altogether these rounding operations reduce the result by at most $n \cdot 2 \div 3$. This gives us a lower bound:

$$
\lceil (b_1 + b_2 + \ldots + b_n - n \cdot 2) \div 3 \rceil \leq \lfloor b_1 \div 3 \rfloor + \lfloor b_2 \div 3 \rfloor + \ldots + \lfloor b_n \div 3 \rfloor
$$

If $n$ were arbitrary, these bounds wouldn’t help us at all. But $n$ isn’t arbitrary: the keys used for PPPP encryption tend to be fairly short. Let’s say that we are dealing with keys of length 16 at most, which is a safe bet. If we know the sum of the bytes, these bounds allow us to narrow down $k_3$ to $\lceil 16 \cdot 2 \div 3 \rceil = 11$ possible values.

But we don’t know the sum of the bytes. What we have is $k_1$, which is that sum modulo 256; the sum is actually $i \cdot 256 + k_1$ where $i$ is some nonnegative integer. How large can $i$ get? Remembering that we are dealing with ASCII keys, each byte has at most the value 127. And we have at most 16 bytes. So the sum of the bytes cannot be higher than $127 \cdot 16 = 2032$ (or 7F0 in hexadecimal). Consequently, $i$ is 7 at most.

Let’s write down the bounds for $k_3$ now:

$$
\lceil (i \cdot 256 + k_1 - n \cdot 2) \div 3 \rceil \leq j \cdot 256 + k_3 \leq \lfloor (i \cdot 256 + k_1) \div 3 \rfloor
$$

We have to consider this for eight possible values of $i$. Wait, do we really?

Once we move into modulo-256 space again, the $i \cdot 256 \div 3$ part of our bounds (which is the only part dependent on $i$) will assume the same value after every three values of $i$. So only three values of $i$ are really relevant, say 0, 1 and 2. Meaning that for each value of $k_1$ we have $3 \cdot 11 = 33$ possible values for $k_3$.

This gives us $256 \cdot 33 \cdot 64 = 540{,}672$ as the number of possible effective keys. My experiments with random keys indicate that this should be pretty much as far down as it goes. There may still be more edge conditions rendering some effective keys impossible, but if these exist, their impact is insignificant.

Not all effective keys are equally likely, however; the $k_3$ values at the outer edges of the possible range are very unlikely. So one could prioritize the keys by probability – if the total number weren’t already low enough to render this exercise moot.

How many ciphertexts is that?

We have the four byte plaintext F1 30 00 00 and we have 540,672 possible effective keys. How many ciphertexts does this translate to? With any reasonable encryption scheme the answer would be: slightly less than 540,672 due to a few unlikely collisions which could occur here.

But PPPP doesn’t use a reasonable encryption scheme. With merely four bytes of plaintext, there is a significant chance that PPPP will only use part of the effective key for encryption, resulting in identical ciphertexts for every key sharing that part. I didn’t bother analyzing this possibility mathematically; my script simply generated all possible ciphertexts. So the exact answer is: 540,672 effective keys produce 157,092 ciphertexts.

And that’s why you should leave cryptography to experts.

Understanding the response

Now let’s say we send 157,092 encrypted requests. An encrypted response comes back. How do we decrypt it without knowing which of the requests was accepted?

All PPPP packets start with the magic byte F1, so the first byte of our response’s plaintext must be F1 as well. The “encryption” scheme used by PPPP allows translating that knowledge directly into the value of $k_1$. Now one could probably (definitely) guess more plaintext parts and, with some clever tricks, deduce the rest of the effective key. But there are only $33 \cdot 64 = 2{,}112$ possible effective keys for each value of $k_1$ anyway. It’s much easier to simply try out all 2,112 possibilities and see which one results in a response that makes sense.

The response here is 24 bytes large, making ambiguous decryptions less likely. Still, my experiments show that in approximately 4% of cases closely related keys will produce valid but different decryption results. So you will get two or more similar device IDs, and any one of them could be correct. I don’t think that this ambiguity can be resolved without further communication with the device, but at least with my changes the script reliably detects when a PPPP device is present on the network.

Jonathan AlmeidaUpdate jj bookmarks to the latest revision

Got this one from another colleague as well, but it seems like most folks use some version of this daily, so it might be good to have it built-in.

Before I can jj git push my current bookmark to my remote, I need to move my (tracked) bookmark to the latest change:

@  ptuqwsty git@jonalmeida.com 2026-01-05 16:00:22 451384bf <-- move 'main' here.
  TIL: Update remote bookmark to the latest revision
  xoqwkuvu git@jonalmeida.com 2025-12-30 19:50:51 main git_head() 9ad7ce11
  TIL: Preserve image scale with ImageMagick
~

A quick one-liner jj tug does that for me:

@  ptuqwsty git@jonalmeida.com 2026-01-05 16:03:54 main* 6e7173b4
  TIL: Update remote bookmark to the latest revision
  xoqwkuvu git@jonalmeida.com 2025-12-30 19:50:51 main@origin git_head() 9ad7ce11
  TIL: Preserve image scale with ImageMagick
~

The alias is quite straightforward:

[aliases]
# Update your bookmarks to your latest rev.
tug = ["bookmark", "move", "--from", "heads(::@ & bookmarks())", "--to", "@"]

The revset heads(::@ & bookmarks()) picks out the closest bookmarked ancestor(s) of the working copy, and --to @ moves that bookmark up to the current change.

The Rust Programming Language BlogProject goals update — December 2025

The Rust project is currently working towards a slate of 41 project goals, with 13 of them designated as Flagship Goals. This post provides selected updates on our progress towards these goals (or, in some cases, lack thereof). The full details for any particular goal are available in its associated tracking issue on the rust-project-goals repository.

Flagship goals

"Beyond the `&`"

Progress
Point of contact

Frank King

Champions

compiler (Oliver Scherer), lang (TC)

Task owners

Frank King

1 detailed update available.

Comment by @frank-king posted on 2025-12-18:

Design a language feature to solve Field Projections (rust-lang/rust-project-goals#390)
Progress
Point of contact

Benno Lossin

Champions

lang (Tyler Mandry)

Task owners

Benno Lossin

5 detailed updates available.

Comment by @BennoLossin posted on 2025-12-07:

Since we have chosen virtual places as the new approach, we reviewed what open questions are most pressing for the design. Our discussion resulted in the following five questions:

  1. Should we have 1-level projections xor multi-level projections?
  2. What is the semantic meaning of the borrow checker rules (BorrowKind)?
  3. How should we add "canonical projections" for types such that we have nice and short syntax (like x~y or x.@y)?
  4. What to do about non-indirected containers (Cell, MaybeUninit, Mutex, etc)?
  5. How does one inspect/query Projection types?

We will focus on these questions in December as well as implementing FRTs.

Comment by @BennoLossin posted on 2025-12-12:

Canonical Projections

We have discussed canonical projections and come up with the following solution:

pub trait CanonicalReborrow: HasPlace {
    type Output<'a, P: Projection<Source = Self::Target>>: HasPlace<Target = P::Target>
    where
        Self: PlaceBorrow<'a, P, Self::Output<'a, P>>;
}

Implementing this trait permits using the syntax @$place_expr where the place's origin is of the type Self (for example @x.y where x: Self and y is an identifier or tuple index, or @x.y.z etc). It is desugared to be:

@<<Self as CanonicalReborrow>::Output<'_, projection_from_place_expr!($place_expr)>> $place_expr

(The names of the trait, associated type and syntax are not final, better suggestions welcome.)

Reasoning

  • We need the Output associated type to support the @x.y syntax for Arc and ArcRef.
  • We put the FRT and lifetime parameter on Output in order to force implementers to always provide a canonical reborrow, so if @x.a works, then @x.b also works (when b is also a field of the struct contained by x).
    • This (sadly or luckily) also has the effect that making @x.a and @x.b return different wrapper types is more difficult to implement and requires a fair bit of trait dancing. We should think about discouraging this in the documentation.
Comment by @BennoLossin posted on 2025-12-16:

Non-Indirected Containers

Types like MaybeUninit<T>, Cell<T>, ManuallyDrop<T>, RefCell<T> etc. currently do not fit into our virtual places model, since they don't have an indirection. They contain the place directly inline (and some are even repr(transparent)). For this reason, we currently don't have projections available for &mut MaybeUninit<T>.

Enter our new trait PlaceWrapper which these types implement in order to make projections available for them. We call these types place wrappers. Here is the definition of the trait:

pub unsafe trait PlaceWrapper<P: Projection<Source = Self::Target>>: HasPlace {
    type WrappedProjection: Projection<Source = Self>;

    fn wrap_projection(p: P) -> Self::WrappedProjection;
}

This trait should only be implemented when Self doesn't contain the place behind an indirection (so for example Box must not implement the trait). When this trait is implemented, Self has "virtual fields" available (actually all kinds of place projections). The names of these virtual fields/projections are the same as those of the contained place, but their output type is controlled by this trait.

As an example, here is the implementation for MaybeUninit:

impl<T, P: Projection<Source = T>> PlaceWrapper<P> for MaybeUninit<T> {
    type WrappedProjection = TransparentProjection<P, MaybeUninit<T>, MaybeUninit<P::Target>>;

    fn wrap_projection(p: P) -> Self::WrappedProjection {
        TransparentProjection(p, PhantomData, PhantomData)
    }
}

Where TransparentProjection will be available in the standard library defined as:

pub struct TransparentProjection<P, Src, Tgt>(P, PhantomData<Src>, PhantomData<Tgt>);

impl<P: Projection, Src, Tgt> Projection for TransparentProjection<P, Src, Tgt> {
    type Source = Src;
    type Target = Tgt;

    fn offset(&self) -> usize {
        self.0.offset()
    }
}

When there is ambiguity because the wrapper and the wrapped type both have the same field, the wrapper's field takes precedence (the same as currently happens with Deref). It is still possible to refer to the wrapped field by first dereferencing the container, so x.field refers to the wrapper's field and (*x).field refers to the field of the wrapped type.
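This precedence can already be observed with Deref on stable Rust today; the following self-contained example (not from the comment, just for illustration) mirrors the intended x.field vs (*x).field behavior:

use std::ops::Deref;

struct Inner { value: u32 }
struct Wrapper { value: u32, inner: Inner }

impl Deref for Wrapper {
    type Target = Inner;
    fn deref(&self) -> &Inner { &self.inner }
}

fn main() {
    let w = Wrapper { value: 1, inner: Inner { value: 2 } };
    assert_eq!(w.value, 1);    // the wrapper's own field takes precedence
    assert_eq!((*w).value, 2); // explicit deref reaches the wrapped field
}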

Comment by @BennoLossin posted on 2025-12-20:

Field-by-Field Projections vs One-Shot Projections

We have used several different names for these two ways of implementing projections. The first is also called 1-level projections and the second multi-level projections.

The field-by-field approach uses field representing types (FRTs), which represent a single field of a struct with no indirection. When writing something like @x.y.z, we perform the place operation twice, first using the FRT field_of!(X, y) and then again with field_of!(T, z) where T is the resulting type of the first projection.

The second approach, called one-shot projections, instead extends FRTs to full projections: compositions of FRTs that can be empty and dynamic. Using these we desugar @x.y.z to a single place operation.

Field-by-field projections have the advantage that they simplify the implementation for users of the feature, the compiler implementation, and the mental model that people have to keep in mind when interacting with field projections. However, they also have pretty big downsides, which are either fundamental to their design or would require significant complication of the feature:

  • They are less expressive than one-shot projections. For example, when moving out a subsubfield of x: &own Struct by doing let a = @x.field.a, we have to move out field, which prevents us from later writing let b = @x.field.b. One-shot projections allow us to track individual subsubfields with the borrow checker.
  • Field-by-field projections also make it difficult to define type-changing projections in an inference friendly way. Projecting through multiple fields could result in several changes of types in between, so we would have to require only canonical projections in certain places. However, this requires certain intermediate types for which defining their safety invariants is very complex.

We additionally note that the single-function-call desugaring is itself a simplification, and one that lends itself much better to explaining what the @ syntax does.

All of this points in the direction of proceeding with one-shot projections, and we will most likely do that. However, we must note that the field-by-field approach might yield simpler trait definitions that make implementing the various place operations more manageable. There are several open issues on how to design the field-by-field API in the place variation (the previous proposal did have this mapped out clearly, but it does not translate very well to places), which would require significant effort to solve. So at this point we cannot really give a fair comparison. Our initial scouting of the solutions revealed that they all have some sort of limitation (as we explained above for intermediate projection types, for example), which makes field-by-field projections less desirable. So for the moment we are set on one-shot projections, but when the time comes to write the RFC we need to revisit the idea of field-by-field projections.

Comment by @BennoLossin posted on 2025-12-25:

Wiki Project

We started a wiki project at https://rust-lang.github.io/beyond-refs to map out the solution space. We intend to grow it into the single source of truth for the current state of the field projection proposal as well as unfinished and obsolete ideas and connections between them. Additionally, we will aim to add the same kind of information for the in-place initialization effort, since it has overlap with field projections and, more importantly, has a similarly large solution space.

In the beginning you might find many stub pages in the wiki, which we will work on making more complete. We will also mark pages that contain old or abandoned ideas as such, as well as marking the current proposal.

This issue will continue to receive regular detailed updates, which are designed for those keeping reasonably up-to-date with the feature. For anyone out of the loop, the wiki project will be a much better place when it contains more content.

Progress
Point of contact

Aapo Alasuutari

Champions

compiler (Oliver Scherer), lang (Tyler Mandry)

Task owners

Aapo Alasuutari

1 detailed update available.

Comment by @aapoalas posted on 2025-12-17:

Purpose

A refresher on what we want to achieve here: the most basic form of reborrowing we want to enable is this:

// Note: not Clone or Copy
#[derive(Reborrow)]
struct MyMutMarker<'a>(...);

// ...

let marker: MyMutMarker = MyMutMarker::new();
some_call(marker);
some_call(marker);

ie. make it possible for an owned value to be passed into a call twice, having Rust inject a reborrow at each call site that produces a new bitwise copy of the original value for the purposes of the call, and marks the original value as disabled for reads and writes for the duration of the borrow.
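For comparison, this is the behavior that &mut references already get from the compiler today (a standalone example, runnable on stable, not part of the proposal); the Reborrow derive would extend it to user-defined types like the marker above:

fn some_call(m: &mut Vec<u32>) {
    m.push(0);
}

fn main() {
    let mut v = Vec::new();
    let m = &mut v; // not Copy, yet usable twice below
    some_call(m);   // the compiler inserts a reborrow: some_call(&mut *m)
    some_call(m);   // fine: m was only disabled for the duration of the call
    assert_eq!(v.len(), 2);
}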

A notable complication appears when implementing such reborrowing in userland using explicit calls when dealing with returned values:

return some_call(marker.reborrow());

If the borrowed lifetime escapes through the return value, then this will not compile as the borrowed lifetime is based on a value local to this function. Alongside convenience, this is the major reason for the Reborrow traits work.

CoerceShared is a secondary trait that enables equivalent reborrowing that only disables the original value for writes, ie. matching the &mut T to &T coercion.
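That built-in coercion looks like this on stable today (again a small self-contained example for reference, not from the proposal):

fn read_len(v: &Vec<u32>) -> usize {
    v.len()
}

fn main() {
    let mut v = vec![1, 2, 3];
    let m = &mut v;
    let n = read_len(m); // &mut Vec<u32> coerces to &Vec<u32>; m is only
                         // disabled for writes during the call
    m.push(4);           // m is fully usable again afterwards
    assert_eq!(n, 3);
}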

Update

We have the Reborrow trait working, albeit currently with a bug in which the marker must be bound as let mut. We are working towards a working CoerceShared trait in the following form:

trait CoerceShared<Target: Copy> {}

Originally the trait had a type Target ADT but this turned out to be unnecessary, as there is no reason to particularly disallow multiple coercion targets. The original reason for using an ADT to disallow multiple coercion targets was based on the trait also having an unsafe method, at which point unscrupulous users could use the trait as a generic coercion trait. Because the trait method was found to be unnecessary, the fear is also unnecessary.

This means that the trait has better chances of working with multiple coercing lifetimes (think a collection of &muts all coercing to &s, or only some of them). However, we are currently avoiding any support of multiple lifetimes as we want to avoid dealing with rmeta before we have the basic functionality working.

"Flexible, fast(er) compilation"

Progress
Point of contact

David Wood

Champions

cargo (Eric Huss), compiler (David Wood), libs (Amanieu d'Antras)

Task owners

Adam Gemmell, David Wood

1 detailed update available.

Comment by @davidtwco posted on 2025-12-15:

rust-lang/rfcs#3873 is waiting on one checkbox before entering the final comment period. We had our sync meeting on the 11th and decided that we would enter FCP on rust-lang/rfcs#3874 and rust-lang/rfcs#3875 after rust-lang/rfcs#3873 is accepted. We've responded to almost all of the feedback on the next two RFCs and expect the FCP to act as a forcing-function so that the relevant teams take a look, they can always register concerns if there are things we need to address, and if we need to make any major changes then we'll restart the FCP.

Production-ready cranelift backend (rust-lang/rust-project-goals#397)
Progress Will not complete
Point of contact

Folkert de Vries

Champions

compiler (bjorn3)

Task owners

bjorn3, Folkert de Vries, Trifecta Tech Foundation

1 detailed update available.

Comment by @folkertdev posted on 2025-12-01:

We did not receive the funding we needed to work on this goal, so no progress has been made.

Overall, I think the improvements we felt comfortable promising are on the low side. The amount of time spent in codegen for realistic changes to real code bases was smaller than expected, meaning that the improvements cranelift can deliver for the end-user experience are smaller too.

We still believe larger gains can be made with more effort, but did not feel confident in promising hard numbers.

So for now, let's close this.

Promoting Parallel Front End (rust-lang/rust-project-goals#121)
Progress
Point of contact

Sparrow Li

Task owners

Sparrow Li

No detailed updates available.
Relink don't Rebuild (rust-lang/rust-project-goals#400)
Progress Will not complete
Point of contact

Jane Lusby

Champions

cargo (Weihang Lo), compiler (Oliver Scherer)

Task owners

@dropbear32, @osiewicz

No detailed updates available.

"Higher-level Rust"

Stabilize cargo-script (rust-lang/rust-project-goals#119)
Progress
Point of contact

Ed Page

Champions

cargo (Ed Page), lang (Josh Triplett), lang-docs (Josh Triplett)

Task owners

Ed Page

1 detailed update available.

Comment by @epage posted on 2025-12-15:

Key developments

  • A fence length limit was added in response to T-lang feedback (https://github.com/rust-lang/rust/pull/149358)
  • Whether to disallow or lint for CR inside of a frontmatter is under discussion (https://github.com/rust-lang/rust/pull/149823)

Blockers

  • https://github.com/rust-lang/rust/pull/146377
  • rustdoc deciding on and implementing how they want frontmatter handled in doctests

"Unblocking dormant traits"

Progress
Point of contact

Taylor Cramer

Champions

lang (Taylor Cramer), types (Oliver Scherer)

Task owners

Taylor Cramer, Taylor Cramer & others

1 detailed update available.

Comment by @cramertj posted on 2025-12-17:

Current status:

  • The RFC for auto impl supertraits has been updated to address SemVer compatibility issues.
  • There is a parsing PR kicking off an experimental implementation. The tracking issue for this experimental implementation is here.
In-place initialization (rust-lang/rust-project-goals#395)
Progress
Point of contact

Alice Ryhl

Champions

lang (Taylor Cramer)

Task owners

Benno Lossin, Alice Ryhl, Michael Goulet, Taylor Cramer, Josh Triplett, Gary Guo, Yoshua Wuyts

No detailed updates available.
Next-generation trait solver (rust-lang/rust-project-goals#113)
Progress
Point of contact

lcnr

Champions

types (lcnr)

Task owners

Boxy, Michael Goulet, lcnr

1 detailed update available.

Comment by @lcnr posted on 2025-12-15:

We've continued to fix a bunch of smaller issues over the last month. Tim (Theemathas Chirananthavat) helped uncover a new potential issue due to non-fatal overflow which we'll have to consider before stabilizing the new solver: https://github.com/rust-lang/trait-system-refactor-initiative/issues/258.

I fixed two issues myself in https://github.com/rust-lang/rust/pull/148823 and https://github.com/rust-lang/rust/pull/148865.

tiif with help by Boxy fixed query cycles when evaluating constants in where-clauses: https://github.com/rust-lang/rust/pull/148698.

@adwinwhite fixed a subtle issue involving coroutine witnesses in https://github.com/rust-lang/rust/pull/149167 after having diagnosed the underlying issue there last month. They've also fixed a smaller diagnostics issue in https://github.com/rust-lang/rust/pull/149299. Finally, they've also fixed an edge case of impl well-formedness checking in https://github.com/rust-lang/rust/pull/149345.

Shoyu Vanilla fixed a broken interaction of aliases and fudging in https://github.com/rust-lang/rust/pull/149320. Looking into fudging and HIR typeck Expectation handling also uncovered a bunch of broken edge-cases, and I've opened https://github.com/rust-lang/rust/issues/149379 to track these separately.

I have recently spent some time thinking about the remaining necessary work and posted a write-up on my personal blog: https://lcnr.de/blog/2025/12/01/next-solver-update.html. I am currently trying to get a clearer perspective on our cycle handling while slowly working towards an RFC for the changes there. This is challenging as we don't have a good theoretical foundation here yet.

Stabilizable Polonius support on nightly (rust-lang/rust-project-goals#118)
Progress
Point of contact

Rémy Rakic

Champions

types (Jack Huey)

Task owners

Amanda Stjerna, Rémy Rakic, Niko Matsakis

2 detailed updates available.

Comment by @lqd posted on 2025-12-30:

This month's key developments were:

  • borrowck support in a-mir-formality has been progressing steadily; it has its own dedicated updates in https://github.com/rust-lang/rust-project-goals/issues/122 with more details
  • we were also able to find a suitable master's student for the project on a-mir-formality (they accepted and should start around February), which will help expand our testing coverage for the polonius alpha as well
  • tiif has kept making progress on fixing the opaque type soundness issue https://github.com/rust-lang/trait-system-refactor-initiative/issues/159. It is the one remaining blocker for passing all tests. By itself it will not immediately fix the two remaining (soundness) issues with opaque type region liveness, but we'll be able to use the same supporting code to ensure the regions are indeed live where they need to be
  • I quickly cleaned up some inefficiencies in constraint conversion; it hasn't landed yet, but it may not need to because of the next item
  • but most of the time this month was spent on this final item: we have the first interesting results from the rewriting effort. After a handful of false starts, I have a branch almost ready that switches the constraint graph to be lazy and computed during traversal. It removes the need to index the numerous lists of constraints, or to convert liveness data to a different shape. It thus greatly reduces the current alpha overhead (some rare cases even look faster than NLL, but I don't yet know why; maybe due to being able to better use the sparseness and low connectivity of the constraint graph, and a small number of loans). The overhead wasn't entirely removed of course: the worst offending benchmark has a +5% wall-time regression, and icounts look worse (+13%). This was also only benchmarking the algorithm itself, without the improvements to the rest of borrowck mentioned in previous updates. I should be able to open a PR in the next couple of days, once I figure out how best to convert the polonius mermaid graph dump to the new lazy localized constraint generation
  • and finally, happy holidays everyone!
Comment by @lqd posted on 2025-12-31:
  • I should be able to open a PR in the next couple days

done in https://github.com/rust-lang/rust/pull/150551

Goals looking for help


Other goal updates

Borrow checking in a-mir-formality (rust-lang/rust-project-goals#122)
Progress
Point of contact

Niko Matsakis

Champions

types (Niko Matsakis)

Task owners

Niko Matsakis, tiif

4 detailed updates available.

Comment by @nikomatsakis posted on 2025-12-03:

PR https://github.com/rust-lang/a-mir-formality/pull/206 contains a "first draft" for the NLL rules. It checks for loan violations (e.g., mutating borrowed data) as well as some notion of outlives requirements. It does not check for move errors and there aren't a lot of tests yet.

Comment by @nikomatsakis posted on 2025-12-03:

The PR also includes two big improvements to the a-mir-formality framework:

  • support for (for_all) rules that can handle "iteration"
  • tracking proof trees, making it much easier to tell why something is accepted that should not be
Comment by @nikomatsakis posted on 2025-12-10:

Update: opened https://github.com/rust-lang/a-mir-formality/pull/207 which contains support for &mut, wrote some new tests (including one FIXME), and added a test for NLL Problem Case #3 (which behaved as expected).

One interesting thing (cc Ralf Jung) is that we have diverged from MiniRust in a few minor ways:

  • We do not support embedding value expressions in place expressions.
  • Where MiniRust has an AddrOf operator that uses the PtrType to decide what kind of operation it is, we have added a Ref MIR operation. This is in part because we need information that is not present in MiniRust, specifically a lifetime.
  • We have also opted to extend goto with the ability to take multiple successors, so that goto b1, b2 can be seen as "goto either b1 or b2 non-deterministically" (the actual opsem would probably be to always go to b1, making this a way to add "fake edges", but the analysis should not assume that).
Comment by @nikomatsakis posted on 2025-12-17:

Update: opened https://github.com/rust-lang/a-mir-formality/pull/210 with today's work. We are discussing how to move the checker to support polonius-alpha. To that end, we introduced feature gates (so that a-mir-formality can model nightly features) and did some refactoring of the type checker aiming at allowing outlives to become flow-sensitive.

C++/Rust Interop Problem Space Mapping (rust-lang/rust-project-goals#388)
Progress
Point of contact

Jon Bauman

Champions

compiler (Oliver Scherer), lang (Tyler Mandry), libs (David Tolnay)

Task owners

Jon Bauman

No detailed updates available.
Comprehensive niche checks for Rust (rust-lang/rust-project-goals#262)
Progress
Point of contact

Bastian Kersting

Champions

compiler (Ben Kimock), opsem (Ben Kimock)

Task owners

Bastian Kersting, Jakob Koschel

No detailed updates available.
Progress
Point of contact

Boxy

Champions

lang (Niko Matsakis)

Task owners

Boxy, Noah Lev

3 detailed updates available.

Comment by @BoxyUwU posted on 2025-12-30:

Since the last update both of my PRs I mentioned have landed, allowing for constructing ADTs in const arguments while making use of generic parameters. This makes MGCA effectively a "full" prototype where it can now fully demonstrate the core concept of the feature. There's still a lot of work left to do but now we're at the point of finishing out the feature :)

Once again huge thanks to camelid for sticking with me throughout this. Also thanks to errs, oli and lcnr for reviewing some of the work and chatting with me about possible impl decisions.

Some examples of what is possible with MGCA as of the end of this goal cycle:

#![feature(const_default, const_trait_impl, min_generic_const_args)]

trait Trait {
    #[type_const]
    const ASSOC: usize;
}

fn mk_array<T: const Default + Trait>() -> [T; T::ASSOC] {
    [const { T::default() }; _]
}

#![feature(adt_const_params, min_generic_const_args)]

fn foo<const N: Option<u32>>() {}

trait Trait {
    #[type_const]
    const ASSOC: usize;
}

fn bar<T: Trait, const N: u32>() {
    // the initializer of `_0` is a `N` which is a legal const argument
    // so this is ok.
    foo::<{ Some::<u32> { 0: N } }>();

    // this is allowed as mgca supports uses of assoc consts in the
    // type system. ie `<T as Trait>::ASSOC` is a legal const argument
    foo::<{ Some::<u32> { 0: <T as Trait>::ASSOC } }>();

    // this on the other hand is not allowed as `N + 1` is not a legal
    // const argument
    foo::<{ Some::<u32> { 0: N + 1 } }>(); // ERROR
}

As for adt_const_params we now have a zulip stream specifically for discussion of the upcoming RFC and the drafting of the RFC: #project-const-generics/adt_const_params-rfc. I've gotten part of the way through actually writing the RFC itself though it's gone slower than I had originally hoped as I've also been spending more time thinking through the implications of allowing private data in const generics.

I've debugged the remaining two ICEs keeping adt_const_params from being fully ready for stabilization and written some brief instructions on how to resolve them. One ICE has been incidentally fixed (or rather masked) by some work that Kivooeo has been doing on MGCA. The other has been picked up by someone whose GitHub handle I'm not sure of, so that will also be getting fixed soon.

Comment by @BoxyUwU posted on 2025-12-30:

Ah I forgot to mention, even though MGCA has a tonne of work left to do I expect it should be somewhat approachable for people to help out with. So if people are interested in getting involved now is a good time :)

Comment by @BoxyUwU posted on 2025-12-30:

Ah, another thing I forgot to mention: David Wood spent some time looking into the name mangling scheme for adt_const_params to make sure it would be fine to stabilize, and it seems it is, so that's another step closer to adt_const_params being stabilizable.

Continue resolving `cargo-semver-checks` blockers for merging into cargo (rust-lang/rust-project-goals#104)
Progress
Point of contact

Predrag Gruevski

Champions

cargo (Ed Page), rustdoc (Alona Enraght-Moony)

Task owners

Predrag Gruevski

No detailed updates available.
Develop the capabilities to keep the FLS up to date (rust-lang/rust-project-goals#391)
Progress
Point of contact

Pete LeVasseur

Champions

bootstrap (Jakub Beránek), lang (Niko Matsakis), spec (Pete LeVasseur)

Task owners

Pete LeVasseur, Contributors from Ferrous Systems and others TBD, t-spec and contributors from Ferrous Systems

1 detailed update available.

Comment by @PLeVasseur posted on 2025-12-16:

Meeting notes here: FLS team meeting 2025-12-12

Key developments: We're close to completing the FLS release for 1.91.0/1.91.1. We've started to operate as a team, merging a PR with the changelog entries, then opening up issues for each change required: ✅ #624 (https://github.com/rust-lang/fls/issues/624), ✅ #625 (https://github.com/rust-lang/fls/issues/625), ✅ #626 (https://github.com/rust-lang/fls/issues/626), ⚠️ #623 (https://github.com/rust-lang/fls/issues/623). #623 is still pending, as it requires a bit of alignment with the Reference on definitions and the creation of a new example.

Blockers: None currently.

Help wanted: We'd love more folks from the safety-critical community to contribute by picking up issues, or by opening an issue if you notice something is missing.

Emit Retags in Codegen (rust-lang/rust-project-goals#392)
Progress
Point of contact

Ian McCormack

Champions

compiler (Ralf Jung), opsem (Ralf Jung)

Task owners

Ian McCormack

1 detailed update available.

Comment by @icmccorm posted on 2025-12-16:

Here's our December status update!

  • We have revised our prototype of the pre-RFC based on Ralf Jung's feedback. Now, instead of having two different retag functions for operands and places, we emit a single __rust_retag intrinsic in every situation. We also track interior mutability precisely. At this point, the implementation is mostly stable and seems to be ready for an MCP.

  • There's been some discussion here and in the pre-RFC about whether or not Rust will still have explicit MIR retag statements. We plan on revising our implementation so that we no longer rely on MIR retags to determine where to insert our lower-level retag calls. This should be a relatively straightforward change to the current prototype. If anything, it should make these changes easier to merge upstream, since they will no longer affect Miri.

  • BorrowSanitizer continues to gain new features, and we've started testing it on our first real crate (lru), which has uncovered a few new bugs in our implementation. The two core Tree Borrows features that we have left to support are error reporting and garbage collection. Once these are finished, we will be able to expand our testing to more real-world libraries and confirm that we are passing each of Miri's test cases (and likely find more bugs lurking in our implementation). Our instrumentation pass ignores global and thread-local state for now, and it does not support atomic memory accesses outside of atomic load and store instructions. These operations should be relatively straightforward to add once we've finished higher-priority items.

  • Performance is slow. We do not know exactly how slow yet, since we've been focusing on feature support over benchmarking and optimization. This is at least partially due to the lack of garbage collection, based on what we're seeing from profiling. We will have a better sense of what our performance is like once we can compare against Miri on more real-world test cases.

As for what's next, we plan on posting an MCP soon, now that it's clear that we will be able to do without MIR retags. You can expect a more detailed status update on BorrowSanitizer by the end of January. This will discuss our implementation and plans for 2026. We will post that here and on our project website.

Expand the Rust Reference to specify more aspects of the Rust language (rust-lang/rust-project-goals#394)
Progress
Point of contact

Josh Triplett

Champions

lang-docs (Josh Triplett), spec (Josh Triplett)

Task owners

Amanieu d'Antras, Guillaume Gomez, Jack Huey, Josh Triplett, lcnr, Mara Bos, Vadim Petrochenkov, Jane Lusby

1 detailed update available.

Comment by @joshtriplett posted on 2025-12-17:

In addition to further ongoing work on reference material (some of which is on track to be merged), we've had some extensive discussions about reference processes, maintenance, and stability markers. Niko Matsakis is putting together a summary and proposal for next steps.

Finish the libtest json output experiment (rust-lang/rust-project-goals#255)
Progress
Point of contact

Ed Page

Champions

cargo (Ed Page)

Task owners

Ed Page

No detailed updates available.
Finish the std::offload module (rust-lang/rust-project-goals#109)
Progress
Point of contact

Manuel Drehwald

Champions

compiler (Manuel Drehwald), lang (TC)

Task owners

Manuel Drehwald, LLVM offload/GPU contributors

2 detailed updates available.

Comment by @ZuseZ4 posted on 2025-12-02:

It's only been two weeks, but we got a good number of updates, so I already wanted to share them.

autodiff

  1. On the autodiff side, we landed the support for rlib and better docs. This means that our autodiff frontend is "almost" complete, since there are almost no cases left where you can't apply autodiff. There are a few features like custom derivatives or support for dyn arguments that I'd like to add, but they are currently waiting for better docs on the Enzyme side. There is also a long-term goal of replacing the fat-lto requirement with the less invasive embed-bc requirement, but this proved to be tricky in the past and only affects compile times.
  2. @sgasho picked up my old PR to dlopen enzyme, and found the culprit of it failing after my last rebase. A proper fix might take a bit longer, but it might be worth waiting for. As a reminder, using dlopen in the future allows us to ship autodiff on nightly without increasing the size of rustc and therefore without making our infra team sad.

All in all, we have landed most of the hard work here, so that's a very comfortable position to be in before enabling it on nightly.

offload

  1. We have landed the intrinsic implementation of Marcelo Domínguez, so now you can offload functions with almost arbitrary arguments. In my first prototype, I had limited it to pointers to 256 f64 values. The updated usage example continues to live here in our docs. As you can see, we still require #[cfg(target_os=X)] annotations. Under the hood, the LLVM-IR which we generate is also still a bit convoluted. In his next PRs, he'll clean up the generated IR, and introduce an offload macro that users shall call instead of the internal offload intrinsic.
  2. I spent more time on enabling offload in our CI, to enable std::offload in nightly. After multiple iterations and support from LLVM offload devs, we found a cmake config that does not run into bugs, should not increase Rust CI time too much, and works both with in-tree llvm/clang builds and with external clangs (the current case in our Rust CI).
  3. I spent more time on simplifying the usage instructions in the dev guide. We started with two cargo calls, one rustc call, two clang calls, and two clang-helper binary calls. I was able to remove the rustc call and one of the clang-offload-packager calls by directly calling the underlying LLVM APIs. I also have an unmerged PR which removes the two clang calls. Once I've cleaned it up and landed it, we'll be down to only two cargo calls and one binary call to clang-linker-wrapper. Once I've automated this last wrapper (and enabled offload in CI), nightly users should be able to experiment with std::offload.
Comment by @ZuseZ4 posted on 2025-12-26:

Time for the next round of updates. Again, most of the updates were on the GPU side, but with some notable autodiff improvements too.

autodiff:

  1. @sgasho finished his work on using dlopen to load Enzyme and the PR landed. This allowed Jakub Beránek and me to start working on distributing Enzyme via a standalone component.

  2. As a first step, I added a nicer error if we fail to find or dlopen our Enzyme backend. I also removed most of our autodiff fallbacks; we now unconditionally enable our macro frontend on nightly: https://github.com/rust-lang/rust/pull/150133 You may notice that cargo expand now works on autodiff code. This also allowed the first bug reports about ICEs (internal compiler errors) in our macro parser logic.

  3. Kobzol opened a PR to build Enzyme in CI. In theory, I should have been able to download that artifact, put it into my sysroot, and use the latest nightly to automatically load it. If that had worked, we could have just merged his PR, and everyone could have started using AD on nightly. Of course, things are never that easy. Even though Enzyme, LLVM, and rustc were all built in CI, the LLVM version shipped along with rustc does not seem compatible with the LLVM version Enzyme was built against. We assume some slight cmake mismatch during our CI builds, which we will have to debug.

offload:

  1. On the GPU side, Marcelo Domínguez finished his cleanup PR, and along the way also fixed using multiple kernels within a single codebase. When developing the offload MVP I had taken a lot of inspiration from the LLVM-IR generated by clang - and it looks like I had gotten one of the (way too many) LLVM attributes wrong. That caused some metadata to be fused when multiple kernels were present, confusing our offload backend. We started to find more bugs when working on benchmarks; more about the fixes for those in the next update.

  2. I finished cleaning up my offload build PR, and Oliver Scherer reviewed and approved it. Once the dev-guide gets synced, you should see much simpler usage instructions. Now it's just up to me to automate the last part, then you can compile offload code purely with cargo or rustc. I also improved how we build offload, which allows us to build it both in CI and locally. CI had some very specific requirements to not increase build times, since our x86-64-dist runner is already quite slow.

  3. Our first benchmarks directly linked against NVIDIA and AMD intrinsics on llvm-ir level. However, we already had an nvptx Rust module for a while, and since recently also an amdgpu module which nicely wraps those intrinsics. I just synced the stdarch repository into rustc a few minutes ago, so from now on, we can replace both with the corresponding Rust functions. In the near future we should get a higher level GPU module, which abstracts away naming differences between vendors.

  4. Most of my past rustc contributions were related to LLVM projects or plugins (Offload and Enzyme), and I increasingly found myself asking other people for updates or backports of our LLVM submodule, since upstream LLVM has fixes which were not yet merged into our submodule. Our LLVM working group is quite small and I didn't want to burden them too much with my requests, so I recently asked to join it, which got approved. In the future I intend to help a little with the maintenance here.

Getting Rust for Linux into stable Rust: compiler features (rust-lang/rust-project-goals#407)
Progress
Point of contact

Tomas Sedovic

Champions

compiler (Wesley Wiser)

Task owners

(depending on the flag)

1 detailed update available.

Comment by @tomassedovic posted on 2025-12-05:

Update from the 2025-12-03 meeting:

-Zharden-sls

Wesley reviewed it again, provided a qualification, and requested more changes.

Getting Rust for Linux into stable Rust: language features (rust-lang/rust-project-goals#116)
Progress
Point of contact

Tomas Sedovic

Champions

lang (Josh Triplett), lang-docs (TC)

Task owners

Ding Xiang Fei

2 detailed updates available.

Comment by @tomassedovic posted on 2025-12-05:

Update from the 2025-12-03 meeting.

Deref / Receiver

Ding keeps working on the Reference draft. The idea still hasn't spread widely, and people are not convinced this is a good way to go. We hope the method-probing section in the Reference PR could clear things up.

We're keeping the supertrait auto-impl experiment as an alternative.

RFC #3851: Supertrait Auto-impl

Ding addressed Predrag's requests on SemVer compatibility. He's also opened an implementation PR: https://github.com/rust-lang/rust/pull/149335. Here's the tracking issue: https://github.com/rust-lang/rust/issues/149556.

derive(CoercePointee)

Ding opened a PR to require additional checks for DispatchFromDyn: https://github.com/rust-lang/rust/pull/149068

In-place initialization

Ding will prepare material for a discussion at the LPC (Linux Plumbers Conference). We're looking to hear feedback on the end-user syntax for it.

The feature is growing quite large; Ding will check with Tyler on whether this might need a series of RFCs.

The various proposals on the table continue to be discussed, and there are signs (albeit slow ones) of convergence. The placing-function and guaranteed-return proposals are superseded by the outpointer one. The more ergonomic ideas can be built on top. The guaranteed value placement one would be valuable in the compiler regardless, and we're waiting for Olivier to refine it.

The feeling is that we've now clarified the constraints that the proposals must operate under.

Field projections

Nadri's Custom places proposal is looking good at least for the user-facing bits, but the whole thing is growing into a large undertaking. Benno's been focused on academic work that's getting wrapped up soon. The two will sync afterwards.

Comment by @tomassedovic posted on 2025-12-18:

Quick bit of great news: Rust in the Linux kernel is no longer treated as an experiment, it's here to stay 🎉

https://lwn.net/SubscriberLink/1050174/63aa7da43214c3ce/

Implement Open API Namespace Support (rust-lang/rust-project-goals#256)
Progress
Point of contact

Help Wanted

Champions

cargo (Ed Page), compiler (b-naber), crates-io (Carol Nichols)

Task owners

b-naber, Ed Page

3 detailed updates available.

Comment by @sladyn98 posted on 2025-12-03:

Ed Page, hey, I would like to contribute to this. I reached out on Zulip; bumping up the post in case it might have gone under the radar.

CC Niko Matsakis

Comment by @epage posted on 2025-12-03:

The work is more on the compiler side atm, so Eric Holk and b-naber could speak more to where they could use help.

Comment by @eholk posted on 2025-12-06:

Hi @sladyn98 - feel free to ping me on Zulip about this.

MIR move elimination (rust-lang/rust-project-goals#396)
Progress
Point of contact

Amanieu d'Antras

Champions

lang (Amanieu d'Antras)

Task owners

Amanieu d'Antras

1 detailed update available.

Comment by @Amanieu posted on 2025-12-17:

The RFC draft was reviewed in detail and Ralf Jung pointed out that the proposed semantics introduce issues because they rely on "no-behavior" (NB) with regards to choosing an address for a local. This can lead to surprising "time-traveling" behavior where the set of possible addresses that a local may have (and whether 2 locals can have the same address) depends on information from the future. For example:

// This program has DB (defined behavior).
let x = String::new();
let xaddr = &raw const x;
let y = x; // Move out of x and de-initialize it.
let yaddr = &raw const y;
x = String::new(); // assuming this does not change the address of x
// x and y are both live here. Therefore, they can't have the same address.
assume(xaddr != yaddr);
drop(x);
drop(y);

// This program has UB.
let x = String::new();
let xaddr = &raw const x;
let y = x; // Move out of x and de-initialize it.
let yaddr = &raw const y;
// So far, there has been no constraint that would force the addresses to be different.
// Therefore we can demonically choose them to be the same. Therefore, this is UB.
assume(xaddr != yaddr);
// If the addresses are the same, this next line triggers NB. But actually this next
// line is unreachable in that case because we already got UB above...
x = String::new();
// x and y are both live here.
drop(x);
drop(y);

With that said, there is still a possibility of achieving the optimization, but the scope will need to be scaled down a bit. Specifically, we would need to:

  • no longer perform a "partial free"/"partial allocation" when initializing or moving out of a single field of a struct. The lifetime of a local starts when any part of it is initialized and ends when it is fully moved out.
  • allow a local's address to change when it is re-initialized after having been fully moved out, which eliminates the need for NB.

This reduces the optimization opportunities since we can't merge arbitrary sub-field moves, but it still allows for eliminating moves when constructing a struct from multiple values.
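As a small illustration (not from the RFC) of the case that stays in scope: constructing a struct from multiple locals currently moves each value into the struct's storage, and those are exactly the moves that could still be eliminated:

struct Config {
    name: String,
    path: String,
}

fn build(name: String, path: String) -> Config {
    // Today each field is moved (bitwise-copied) into Config's storage;
    // the scaled-down proposal aims to let the compiler place `name` and
    // `path` there directly, eliding these moves.
    Config { name, path }
}

fn main() {
    let c = build("app".to_string(), "/tmp/app".to_string());
    assert_eq!(c.name, "app");
}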

The next step is for me to rework the RFC draft to reflect this.

Prototype a new set of Cargo "plumbing" commands (rust-lang/rust-project-goals#264)
Progress
Point of contact

Help Wanted

Task owners

Help wanted, Ed Page

No detailed updates available.
Prototype Cargo build analysis (rust-lang/rust-project-goals#398)
Progress
Point of contact

Weihang Lo

Champions

cargo (Weihang Lo)

Task owners

Help wanted, Weihang Lo

2 detailed updates available.

Comment by @weihanglo posted on 2025-12-13:

Key developments: The HTML replay logic has merged. Once it reaches nightly, cargo report timings can open a timing report you previously logged.

  • https://github.com/rust-lang/cargo/pull/16377
  • https://github.com/rust-lang/cargo/pull/16378
  • https://github.com/rust-lang/cargo/pull/16382

Blockers: None, except my own availability

Help wanted: Same as https://github.com/rust-lang/rust-project-goals/issues/398#issuecomment-3571897575

Comment by @weihanglo posted on 2025-12-26:

Key developments:

Headline: if you are using nightly and want timing data always available, you should enable build analysis locally:

[unstable]
build-analysis = true

[build.analysis]
enabled = true

  • More log events are emitted: https://github.com/rust-lang/cargo/pull/16390
    • dependency resolution time
    • unit-graph construction
    • unit-registration (which contain unit metadata)
  • Timing replay from cargo report timings now has almost full feature parity with cargo build --timings, except CPU usage: https://github.com/rust-lang/cargo/pull/16414
  • Renamed the rebuild event to unit-fingerprint; it is now also emitted for fresh units: https://github.com/rust-lang/cargo/pull/16408
  • Proposed a new cargo report sessions command so that people can retrieve previous session IDs, not just the latest one: https://github.com/rust-lang/cargo/pull/16428
  • Proposed removing --timings=json, for which timing info in log files should be a great replacement: https://github.com/rust-lang/cargo/pull/16420
  • Documentation efforts for man pages for nested commands like cargo report: https://github.com/rust-lang/cargo/pull/16430 and https://github.com/rust-lang/cargo/pull/16432

Besides implementations, we also discussed:

  • The interaction of --message-format and structured logging system, as well as log event schemas and formats: https://rust-lang.zulipchat.com/#narrow/channel/246057-t-cargo/topic/build.20analysis.20log.20format/with/558294271
  • A better name for RunId. We may lean towards SessionId, which is a common name in the logging/tracing ecosystem.
  • Letting nested Cargo calls share a sticky session ID, or at least having a way to show they were invoked from the same top-level Cargo call.

Blockers: None, except my own availability

Help wanted: Same as https://github.com/rust-lang/rust-project-goals/issues/398#issuecomment-3571897575

reflection and comptime (rust-lang/rust-project-goals#406)
Progress
Point of contact

Oliver Scherer

Champions

compiler (Oliver Scherer), lang (Scott McMurray), libs (Josh Triplett)

Task owners

oli-obk

1 detailed update available.

Comment by @oli-obk posted on 2025-12-15:

Updates

  • https://github.com/rust-lang/rust/pull/148820 adds a way to mark functions and intrinsics as only callable during CTFE
  • https://github.com/rust-lang/rust/pull/144363 has been unblocked and just needs some minor cosmetic work

Blockers

  • https://github.com/rust-lang/rust/pull/146923 (reflection MVP) has not been reviewed yet
Rework Cargo Build Dir Layout (rust-lang/rust-project-goals#401)
Progress
Point of contact

Ross Sullivan

Champions

cargo (Weihang Lo)

Task owners

Ross Sullivan

1 detailed update available.

Comment by @ranger-ross posted on 2025-12-23:

Status update December 23, 2025

The majority of December was spent iterating on https://github.com/rust-lang/cargo/pull/16155. As mentioned in the previous update, the original locking design was not correct and we have been working through other solutions.

As locking is tricky to get right and there are many scenarios Cargo needs to support, we are trying to descope the initial implementation to an MVP, even if that means we lose some of the concurrency. Once we have an MVP on nightly, we can start gathering feedback on the scenarios that need improvement and iterate.

I'm hopeful that we get an unstable -Zfine-grain-locking on nightly in January for folks to try out in their workflows.


We are also considering adding an opt-in for the new build-dir layout via an env var (CARGO_BUILD_DIR_LAYOUT_V2=true) to allow tool authors to begin migrating to the new layout: https://github.com/rust-lang/cargo/pull/16336

Before stabilizing this, we are doing a crater run to test the impact of the changes and proactively reaching out to projects to minimize breakage as much as possible: https://github.com/rust-lang/rust/pull/149852

Run more tests for GCC backend in the Rust's CI (rust-lang/rust-project-goals#402)
Progress Completed
Point of contact

Guillaume Gomez

Champions

compiler (Wesley Wiser), infra (Marco Ieni)

Task owners

Guillaume Gomez

No detailed updates available.
Rust Stabilization of MemorySanitizer and ThreadSanitizer Support (rust-lang/rust-project-goals#403)
Progress
Point of contact

Jakob Koschel

Task owners

Bastian Kersting (https://github.com/1c3t3a), Jakob Koschel (https://github.com/jakos-sec)

1 detailed update available.

Comment by @jakos-sec posted on 2025-12-15:

Based on the gathered feedback, I opened a new MCP for the proposed new Tier 2 targets with sanitizers enabled (https://github.com/rust-lang/compiler-team/issues/951).

Rust Vision Document (rust-lang/rust-project-goals#269)
Progress
Point of contact

Niko Matsakis

Task owners

vision team

No detailed updates available.
rustc-perf improvements (rust-lang/rust-project-goals#275)
Progress
Point of contact

James

Champions

compiler (David Wood), infra (Jakub Beránek)

Task owners

James, Jakub Beránek, David Wood

1 detailed update available.

Comment by @Kobzol posted on 2025-12-15:

We have enabled the second x64 machine, so we now have benchmarks running in parallel 🎉 There are some smaller things to improve, but next year we can move on to running benchmarks on Arm collectors.

Stabilize public/private dependencies (rust-lang/rust-project-goals#272)
Progress
Point of contact

Help Wanted

Champions

cargo (Ed Page)

Task owners

Help wanted, Ed Page

No detailed updates available.
Stabilize rustdoc `doc_cfg` feature (rust-lang/rust-project-goals#404)
Progress
Point of contact

Guillaume Gomez

Champions

rustdoc (Guillaume Gomez)

Task owners

Guillaume Gomez

1 detailed update available.

Comment by @GuillaumeGomez posted on 2025-12-17:

Opened the stabilization PR, but there are blockers I hadn't heard of, so stabilization will be postponed until they are resolved.

SVE and SME on AArch64 (rust-lang/rust-project-goals#270)
Progress
Point of contact

David Wood

Champions

compiler (David Wood), lang (Niko Matsakis), libs (Amanieu d'Antras)

Task owners

David Wood

3 detailed updates available.

Comment by @davidtwco posted on 2025-12-15:

I haven't made any progress on Deref::Target yet, but I have been focusing on landing rust-lang/rust#143924, which has gone through two rounds of review and will hopefully be approved soon.

Comment by @nikomatsakis posted on 2025-12-18:

Update: David and I chatted on Zulip. Key points:

David has made "progress on the non-Sized Hierarchy part of the goal, the infrastructure for defining scalable vector types has been merged (with them being Sized in the interim) and that'll make it easier to iterate on those and find issues that need solving".

On the Sized hierarchy part of the goal, no progress. We discussed options for migrating. There seem to be three big options:

(A) The conservative-but-obvious route where T: Deref in the old edition is expanded to T: Deref<Target: SizeOfVal> (but in the new edition it means T: Deref<Target: Pointee>, i.e., no additional bounds). The main downside is that new-edition code using T: Deref can't call old-edition code using T: Deref, as the old-edition code has stronger bounds. Therefore new-edition code must either use stronger bounds than it needs or wait until that old-edition code has been updated.

(B) You do something smart with Edition.Old code where you figure out whether the bound can be loose or strict by bottom-up computation. So T: Deref in the old edition could mean either T: Deref<Target: Pointee> or T: Deref<Target: SizeOfVal>, depending on what the function actually does.

(C) You make Edition.Old code always mean T: Deref<Target: Pointee> and you still allow calls to size_of_val but have them cause post-monomorphization errors if used inappropriately. In Edition.New you use stricter checking.

Options (B) and (C) have the downside that changes to the function body (adding a call to size_of_val, specifically) in the old edition can stop callers from compiling. In the case of Option (B), that breakage is at type-check time, because it can change the where-clauses. In Option (C), the breakage is post-monomorphization.

Option (A) has the disadvantage that it takes longer for the new bounds to roll out.

Given this, (A) seems the preferred path. We discussed options for how to encourage that roll-out. We discussed the idea of a lint that would warn Edition.Old code that its bounds are stronger than needed and suggest rewriting to T: Deref<Target: Pointee> to explicitly disable the stronger Edition.Old default. This lint could be implemented in one of two ways:

  • at type-check time, by tracking what parts of the environment are used by the trait solver. This may be feasible in the new trait solver, someone from @rust-lang/types would have to say.
  • at post-mono time, by tracking which functions actually call size_of_val and propagating that information back to callers. You could then compare against the generic bounds declared on the caller.

The former is more useful (knowing what parts of the environment are necessary could be useful for more things, e.g., better caching); the latter may be easier or more precise.

Comment by @nikomatsakis posted on 2025-12-19:

Update to the previous post.

Tyler Mandry pointed me at this thread, where lcnr posted a nice blog post he wrote detailing more about (C).

Key insights:

  • Because the use of size_of_val would still cause post-mono errors when invoked on types that are not SizeOfVal, you know that adding SizeOfVal into the function's where-clause bounds is not a breaking change, even though adding a where clause is a breaking change more generally.
  • But, to David Wood's point, it does mean that there is a change to Rust's semver rules: adding size_of_val would become a breaking change, where it is not today.

This may well be the best option though, particularly as it allows us to make changes to the defaults across-the-board. A change to Rust's semver rules is not a breaking change in the usual sense. It is a notable shift.

Type System Documentation (rust-lang/rust-project-goals#405)
Progress
Point of contact

Boxy

Champions

types (Boxy)

Task owners

Boxy, lcnr

1 detailed update available.

Comment by @BoxyUwU posted on 2025-12-30:

This month I've written some documentation for how Const Generics is implemented in the compiler. This mostly covers the implementation of the stable functionality as the unstable features are quite in flux right now. These docs can be found here: https://rustc-dev-guide.rust-lang.org/const-generics.html

Progress
Point of contact

Jack Wrenn

Champions

compiler (Jack Wrenn), lang (Scott McMurray)

Task owners

Jacob Pratt, Jack Wrenn, Luca Versari

No detailed updates available.

This Week In RustThis Week in Rust 632

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is wgsl-bindgen, a binding generator for WGSL, the WebGPU shading language, to be used with wgpu.

Thanks to Artem Borisovskiy for the suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Rustup

Let us know if you would like your feature to be tracked as a part of this list.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

  • RustWeek 2026 | CFP closes 2026-01-18 | Utrecht, The Netherlands | 2026-05-19 - 2026-05-20
  • RustConf 2026 | CFP closes 2026-02-16 | Montreal, Quebec, Canada | 2026-09-08 - 2026-09-10

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!

Updates from the Rust Project

297 pull requests were merged in the last week

Compiler
Library
Cargo
Rustdoc
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

Not a lot of changes this week. The overall result is positive, largely thanks to https://github.com/rust-lang/rust/pull/142881, which makes the computation of an expensive data structure used by the JumpThreading MIR optimization lazy.

Triage done by @panstromek. Revision range: e1212ea7..112a2742

Summary:

(instructions:u)            mean    range            count
Regressions ❌ (primary)     0.5%    [0.1%, 1.7%]     11
Regressions ❌ (secondary)   0.2%    [0.1%, 0.5%]     6
Improvements ✅ (primary)    -0.5%   [-1.3%, -0.1%]   74
Improvements ✅ (secondary)  -0.6%   [-1.8%, -0.2%]   71
All ❌✅ (primary)            -0.4%   [-1.3%, 1.7%]    85

2 Regressions, 0 Improvements, 3 Mixed; 1 of them in rollups. 37 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs
Compiler Team (MCPs only)

No Items entered Final Comment Period this week for Cargo, Rust, Rust RFCs, Leadership Council, Language Team, Language Reference or Unsafe Code Guidelines.

Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs

Upcoming Events

Rusty Events between 2025-12-31 - 2026-01-28 🦀

Virtual
Asia
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

what even is time?!?

Ralf Jung on his blog

Thanks to llogiq for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by:

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

This Week in Rust 631

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs

Crate of the Week

This week's crate is arcshift, an Arc replacement for read-heavy workloads that supports lock-free atomic replacement.

Thanks to rustkins for the suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Let us know if you would like your feature to be tracked as a part of this list.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No Calls for participation were submitted this week.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

  • RustWeek 2026 | CFP closes 2026-01-18 | Utrecht, The Netherlands | 2026-05-19 - 2026-05-20
  • RustConf 2026 | CFP closes 2026-02-16 | Montreal, Quebec, Canada | 2026-09-08 - 2026-09-10

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!

Updates from the Rust Project

475 pull requests were merged in the last week

Compiler
Library
Rustdoc
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

Very quiet week, with essentially no change in performance.

Triage done by @simulacrum. Revision range: 21ff67df..e1212ea7

1 Regression, 1 Improvement, 3 Mixed; 2 of them in rollups. 36 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs
Rust
Cargo
Compiler Team (MCPs only)
Leadership Council

No Items entered Final Comment Period this week for Rust RFCs, Language Team, Language Reference or Unsafe Code Guidelines.

Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs

Upcoming Events

Rusty Events between 2025-12-24 - 2026-01-21 🦀

Virtual
Asia
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

they should just rename unsafe to C so people can shut up

/u/thisismyfavoritename on /r/rust

Thanks to Brian Kung for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by:

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Tarek Ziadé: all the code are belong to claude*

I have been writing code for a long time, long enough to be suspicious of tools that claim to fundamentally change how I work. And yet, here we are.

The latest iterations of Claude Code are genuinely impressive. Not in a flashy demo way, but in the quiet, dangerous way where you suddenly realize you have delegated large parts of your thinking to it. This post is about that experience, how Claude helped me build rustnn, what worked remarkably well, and where I had to consciously pull myself back.

Claude as a serious coding partner

For rustnn, I leaned heavily on Claude Code. The quality of the generated Rust was consistently high. Beyond producing correct syntax, it reasoned about what the code was supposed to do. It was context-aware in a way that made iterative design feel natural. I could ask for refactors, architectural changes, or alternative approaches, and get answers that actually respected the existing codebase and long-running tests.

This mirrors what many developers have been reporting toward the end of 2025. Claude Code’s agent-oriented design and large-context reasoning make it particularly strong for repository-wide work: multi-file refactors, non-trivial debugging sessions, and architectural changes that need to fit an existing mental model. Compared to Codex-style systems, which still shine for fast edits and local completions, Claude tends to perform better when the task requires sustained reasoning and understanding of project-wide constraints.

Anthropic’s recent Claude releases have reinforced that positioning. Improvements in long-context handling, reasoning depth, and agentic workflows make it easier to treat Claude as something closer to a collaborator than an autocomplete engine.

The turning point for me was when I stopped treating Claude like a chatbot and started treating it like a constrained agent.

That is where CLAUDE.md comes in.

Tuning CLAUDE.md

I stumbled upon an excellent LangChain article on how to turn Claude Code into a domain-specific coding agent.

It clicked immediately. Instead of repeatedly explaining the same constraints, goals, and conventions, I encoded them once. Rust style rules. Project intent. Explicit boundaries. How to react to test failures.

The effect was immediate. Output quality improved, and the amount of back-and-forth dropped significantly. Claude stopped proposing things that were clearly out of scope and started behaving like someone who had actually read and understood the project.

For rustnn, I went one step further and anchored development around WPT conformance tests. That gave both Claude and me a shared, objective target. Tests either pass or they do not. No bikeshedding.

Tweaking CLAUDE.md quickly revealed itself as a never-ending process. There are plenty of articles describing different approaches, and none of them are definitive. The current direction seems to be layering information across multiple files, structuring project documentation so it is optimized for agent consumption while remaining readable for humans, and doing so without duplicating the same knowledge in multiple places.

That balance turns out to be just as important as the model itself.

The slippery slope

There is a trap though, and it is a subtle one.

Once Claude is good enough, you start routing everything through it.

  • Re-running tests.
  • Interpreting obvious build errors.
  • Copying and pasting logs that you already understand.

It feels efficient, but it is not free. Each interaction has a cost, and when you are in a tight edit-build-test loop, those costs add up fast. Worse, you start outsourcing mechanical thinking that you should probably still be doing yourself.

I definitely fell into that trap.

Reducing costs

The solution, for me, was to drastically reduce how much I talk to Claude, and to stop using its prompt environment as a catch-all interface to the project.

Claude became an extra terminal. One I open for very specific tasks, then close. It is not a substitute for my own brain, nor for the normal edit–build–test loop.

Reducing the context window is also critical. A concrete example is Python tracebacks. They are verbose, repetitive, and largely machine-generated noise. Sending full tracebacks back to the model is almost always wasteful.

That is why I added a hook to rewrite them on the fly into a compact form.

The idea is simple: keep the signal, drop the boilerplate. Same information, far fewer tokens. In practice, this not only lowers costs, it often produces better answers because the model is no longer drowning in irrelevant frames and runtime noise. On Python-heavy codebases, this change alone reduced my usage costs by roughly 20%.
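
As an illustration of the idea (a sketch, not the actual hook), a minimal filter might keep only the frame locations and the final exception line, dropping echoed source and decoration:

```rust
use std::io::{self, Read};

// Sketch of a traceback-compacting filter: read a Python traceback on
// stdin, emit a compact form on stdout. The heuristics here are
// illustrative only.
fn main() -> io::Result<()> {
    let mut input = String::new();
    io::stdin().read_to_string(&mut input)?;

    let mut compact = Vec::new();
    for line in input.lines() {
        let trimmed = line.trim_start();
        // Keep `File "...", line N, in func` frames and the final
        // `SomeError: message` line; skip everything else.
        if trimmed.starts_with("File \"") || trimmed.contains("Error:") {
            compact.push(trimmed);
        }
    }
    println!("{}", compact.join("\n"));
    Ok(())
}
```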

Pre-compacting inputs turned out to be one of the most effective cost-control strategies I have found so far, especially when combined with a more deliberate, intentional way of interacting with the model.

Memory across sessions actually matters

Another pain point is session amnesia. You carefully explain design decisions, trade-offs, and long-term goals, only to repeat them again tomorrow.

A well-crafted CLAUDE.md mitigates part of this problem. It works well for static knowledge: coding style, project constraints, architectural boundaries, and things that rarely change. It gives Claude a stable baseline and avoids a lot of repetitive explanations.

But it does not capture evolving context.

It does not remember why a specific workaround exists, which approach you rejected last week, or what subtle behavior a particular test exposed yesterday. As soon as the session ends, that knowledge is gone, and you are back to re-teaching the same mental model.

This is where cross-session, cross-project memory becomes interesting.

I am currently experimenting with claude-mem.

The idea is simple but powerful: maintain a centralized, persistent memory that is automatically updated based on interactions. Instead of manually curating context, relevant facts, decisions, and preferences are summarized and carried forward. Over time, this builds a lightweight but durable understanding of how you work and how your projects evolve.

Compared to CLAUDE.md, this kind of memory is dynamic rather than declarative. It captures intent, not just rules. It also scales across projects, which matters when you jump between repositories that share design philosophy, tooling, or constraints.

It is still early, and it is not magic. You need to be careful about what gets remembered and how summaries are formed. But the direction feels right. Persistent memory reduces cognitive reset costs, shortens warm-up time, and makes the interaction feel less like starting over and more like continuing a conversation you paused yesterday.

That difference adds up.

Final thoughts

Claude Code is good. Very good. Good enough that you need discipline to use it well.

With a tuned CLAUDE.md, clear test-driven goals like WPT conformance, and some tooling to reduce noise and cost, it becomes a powerful accelerator. Without that discipline, it is easy to overuse it and slowly burn budget on things you already know how to do.

I do not think this replaces engineering skill. If anything, it amplifies both good and bad habits. The trick is to make sure it is amplifying the right ones.

References

*The title is a deliberate reference to “All your base are belong to us.” The grammar is broken on purpose. It is a joke, but also a reminder that when tools like Claude get this good, it is easy to give them more control than you intended.

Mozilla Privacy Blog: Behind the Manifesto: Moments that Mattered in our Fight for the Open Web (2025)

Welcome to the blog series “Behind the Manifesto,” where we unpack core issues that are critical to Mozilla’s mission. The Mozilla Manifesto represents our commitment to advancing an open, global internet that gives people meaningful choice in their online experiences, promotes transparency and innovation and protects the public interest over private walled gardens. This blog series digs deeper into our vision for the web and the people who use it, and how these goals are advanced in policymaking and technology.

In 2025, global tech policy raced to keep up with technological change and opportunity. In the midst of this evolution, Mozilla sought to ensure that solutions remained centered on openness, competition and user agency.

From AI Agents and the future of the open web to watershed antitrust cases, competition debates surged. Efforts to drive leadership and innovation in AI led governments across the globe to evaluate priorities. Perennial privacy and security questions remained on the radar, with US states intensifying efforts to pass laws and the EU working to streamline rules on AI, cybersecurity and data. Debates amongst industry, civil society and policymakers reflected the intensity of these moments.

Just as we have for over 20 years, Mozilla showed up to build, convene, debate and advocate. It’s clear that, more than ever, there must be urgency to truly put people first. Below is a selection of key moments we’re reflecting on as we head into 2026.

FEBRUARY 2025 

Mozilla Participates in Paris AI Action Summit as Part of the Steering Committee

Mozilla participated in the Paris AI Action Summit as part of the Steering Committee, with an action-packed schedule that included appearances on panels, a live recording of the podcast “Computer Says Maybe” and a reception to reflect on discussions and thank all the officials and researchers who had worked so hard to make the Summit a success.

Additionally, Mozilla and other partners, including Hugging Face, Microsoft and OpenAI, launched Robust Open Online Safety Tools (ROOST) at the Paris AI Action Summit. The entity is designed to create open source foundations for safer and more responsible AI development, ensuring that safety and transparency remain central to innovation.

The launch of ROOST happened at exactly the right time and in the right place. The Paris AI Action Summit provided a global backdrop for launching work that will ultimately help make AI safety a field that everyone can shape and improve.

Mozilla Event: AI & Competition featuring the President of the German Competition Authority

On February 12, we hosted a public event in Berlin on AI & competition, in partnership with the German daily newspaper Tagesspiegel. Addressing the real risk of market concentration at various levels of the AI stack, the President of the German competition authority (Bundeskartellamt), Andreas Mundt, delivered a keynote address setting out his analysis of competition in AI and the role of his authority in ensuring contestable markets as technology rapidly evolves.

MARCH 2025 

America’s AI Action Plan

In March, Mozilla responded to the White House’s request for information on AI policy, urging policymakers to ensure that AI remained open, competitive and accountable. The comments also warned that concentrated control by a few tech giants threatened innovation and public trust, and called for stronger support of open source AI, public AI infrastructure, transparent energy use and workforce development. Mozilla underscored that these frameworks are essential to building an AI ecosystem that serves the public interest rather than purely corporate bottom lines.

Mozilla Mornings: Promoting a privacy-preserving online ads ecosystem

The same month, we also hosted a special edition of Mozilla Mornings focused on the future of online advertising and the role Privacy-Enhancing Technologies (PETs) can play in reshaping it. The conversation came at a critical moment in Europe, amidst discussions on updating privacy legislation while enforcing existing rules.

The session brought together policymakers, technologists, and civil-society experts to examine how Europe can move toward a fairer and more privacy-respecting advertising ecosystem. Speakers explored the limitations of today’s surveillance-driven model and outlined how PETs and Privacy-Preserving Technologies (PPTs) could offer a viable alternative that protects users while sustaining the economic foundations of the open web. The event underscored Mozilla’s commitment to advancing privacy-respecting technologies and ensuring that both policy and technical design converge toward a healthier online advertising ecosystem.

MAY 2025 

CPDP: The Evolution of PETs in Digital Ads

At the Brussels 2025 International CPDP Conference, Mozilla organized and participated in a panel titled “The Evolution of PETs in Digital Ads: Genuine Privacy Innovation or Market Power Play?” The discussion explored how Privacy-Enhancing Technologies (PETs) — tools designed to minimize data collection and protect user privacy — are reshaping the digital advertising landscape. Panelists debated how to encourage genuine privacy innovation without reinforcing existing power structures, and how regulations like the GDPR and the Digital Markets Act (DMA) can help ensure PETs foster transparency and competition.

Competition in Focus: U.S. vs Google

The U.S. v. Google remedies trial was a defining moment — not just for 2025, but for the future of browser and search competition. While the remedies phase was about creating competition in the search market, some of the proposed remedies risked weakening independent browsers like Firefox, the very players that make real choice possible.

In early May, Mozilla’s CFO, Eric Muhlheim, testified to this very point. Muhlheim’s testimony, and Mozilla’s amicus brief in the case, spoke to the vital role of small, independent browsers in driving competition and innovation across the web and warned about the risks of harming their ability to select the search default that best serves their users. Ensuring a competitive search ecosystem while avoiding harm to browser competition remains an important issue in 2026.

JUNE 2025

Open by Design: How Nations Can Compete in the Age of AI 

The choices governments make today, about who gets to build, access and benefit from AI, will shape economic competitiveness, national security and digital rights for decades. In June, Mozilla supported a new report by the UK think tank Demos, exploring how and why embracing openness in key AI resources can spur innovation and adoption. Enabling safer, more transparent development and boosting digital sovereignty is a recipe, if there ever was one, for ‘winning’ at AI.

EU Digital Summit: Advocating for Open and Secure Digital Ecosystems

Digital competitiveness depends on open, secure, and interoperable ecosystems that foster innovation while respecting users’ rights. We spoke at the 2025 European Digital Summit—a flagship forum bringing together policymakers, regulators, industry leaders, and civil society—and argued that openness and security reinforce each other, that smart regulation has the potential to lower entry barriers and curb gatekeeping power, and that innovation does not require sacrificing privacy when incentives are aligned toward rights-respecting designs. The takeaway was clear: enforcing interoperability, safeguarding pro-competition rules, and embedding privacy-by-design incentives are essential to a resilient, innovative, and trustworthy open web.

JULY 2025

Joint Letter to the UK Secretary of State on DMCCA

When choice disappears, innovation stalls. In July, Mozilla sent an open letter to UK Ministers and the Competition & Markets Authority urging faster implementation of the UK Digital Markets, Competition & Consumers Act (DMCCA). As an organisation that exists to create an internet that is open and accessible to all, Mozilla has long supported competitive digital markets. Since the EU Digital Markets Act took effect in 2024, users have begun to benefit from genuine choice for the first time, with interventions like browser choice screens. The result? People are choosing independent alternatives to gatekeeper defaults: Firefox daily active users on iOS rose by 150% across the EU. The UK’s DMCCA could be similarly revolutionary for UK consumers and for the many challenger businesses taking on market dominance.

SEPTEMBER 2025

Digital Bootcamp: Bringing Internet Architecture to the Heart of EU Policymaking

In September, Mozilla officially launched its Digital Bootcamp initiative, developed in partnership with Cloudflare, Proton and CENTR, to strengthen policymakers’ understanding of how the internet actually works and why this technical foundation is essential for effective regulation. We delivered interactive sessions across EU institutions, including a workshop for Members of the European Parliament, the European Commission, and representatives of the EU member states.

Across these workshops, we demystified the layered architecture of the internet, explained how a single website request moves through the stack, and clarified which regulatory obligations apply at each layer. By bridging the gap between engineering and policymaking, Digital Bootcamp is helping ensure EU digital laws remain grounded in technical reality, supporting evidence-based decisions that protect innovation, security and the long-term health of the open web.

OCTOBER 2025 

Mozilla Meetup: The Future of Competition

On October 8, Mozilla hosted a Meetup on Competition in Washington, D.C., bringing together leading voices in tech policy — including Alissa Cooper (Knight-Georgetown Institute), Amba Kak (AI Now Institute), Luke Hogg (Foundation for American Innovation) and Kush Amlani (Mozilla) — to discuss the future of browser competition, antitrust enforcement and AI’s growing influence on the digital landscape. Moderated by Bloomberg’s Leah Nylen, the event reinforced our ongoing efforts to establish a more open and competitive internet, highlighting how policy decisions in these areas directly shape user choice, innovation, and the long-term health of the open web.

Global Encryption Day

On October 21, Mozilla marked Global Encryption Day by reaffirming our commitment to strong encryption as a cornerstone of online privacy, security, and trust. For years, Mozilla has played an active role in shaping the broader policy debate on encryption by consistently pushing back against efforts to weaken it and working with partners around the world to safeguard the technology that helps to keep people secure online – from joining the Global Encryption Coalition Steering Committee, to challenging U.S. legislation like the EARN IT Act and leading multi-year efforts in the EU to address encryption risks in the eIDAS Regulation.

California’s Opt Me Out Act: A Continuation of the Fight For Privacy

The passage of California’s Opt Me Out Act (AB 566) marked a major step forward in Mozilla’s ongoing effort to strengthen digital privacy and give users control of their personal data. For years, Mozilla has spoken in support of Global Privacy Control (GPC) — a tool already integrated into Firefox — as a model for privacy-by-design solutions that can be both effective and user-friendly.

NOVEMBER 2025

Mozilla Submits Recommendations on the Digital Fairness Act

In November, Mozilla submitted its response to the European Commission’s consultation on the Digital Fairness Act (DFA), framing it as a key opportunity to modernise consumer protection for AI-driven and highly personalised digital services. Mozilla argued that effective safeguards must tackle both interface design and underlying system choices, prohibit harmful design practices, and set clear fairness standards for personalization and advertising. A well-designed DFA can complement existing EU laws, strengthen user autonomy, provide legal certainty for innovators, and support a more competitive digital ecosystem built on genuine user choice.

Mozilla hosts AI breakfast in UK Parliament

Mozilla President, Mark Surman, hosted MPs and Peers for a breakfast in Parliament to discuss how policymakers can nurture AI that supports public good. As AI policy moves from principle to implementation, the breakfast offered insight into the models, trade-offs and opportunities that will define the next phase of the UK’s AI strategy.

DECEMBER 2025

Mozilla Joins Tech Leaders at US House AI Caucus Briefing

Internet Works, an association of “Middle Tech” companies, organized a briefing with the Congressional AI Caucus. The goal was to give Members of Congress and their staff a better understanding of the Middle Tech ecosystem and how smaller companies are adopting and scaling AI technologies. Mozilla spoke on the panel, lending valued technical expertise and setting out how we’re thinking about keeping the web open for innovation, competition and user choice with this new technology stack.

eIDAS2 Regulation: Defending Web Security and Trust

In December, the EU published the final implementing rules for eIDAS2, closing a multi-year fight over proposals that would have required browsers to automatically trust government-mandated website certificates—putting encryption, user trust, and the open web at risk. Through sustained advocacy and deep technical engagement, Mozilla helped secure clear legal safeguards preserving independent browser root programs and strong TLS security. We also ensured that the final standards respect existing security norms and reflect how the web actually works. With all rules now published, users can continue to rely on browsers to verify websites independently with strict security requirements, governments are prevented from weakening web encryption by default, and a dangerous global precedent for state-controlled trust on the internet has been avoided.

This blog is part of a larger series. Be sure to follow Jenn Taylor Hodges on LinkedIn for further insights into Mozilla’s policy priorities.

The post Behind the Manifesto: Moments that Mattered in our Fight for the Open Web (2025) appeared first on Open Policy & Advocacy.

Firefox Nightly: Closing out 2025 Strong – These Weeks in Firefox: Issue 193

Highlights

  • The Desktop Integrations team has begun a controlled rollout of the new Backup feature, starting with users on Windows 10! This gives users more options and the ability to move their data from machine to machine when getting new devices. (146 Release Notes)

Friends of the Firefox team

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug

  • Aloys
  • Kipchumba Chelilim [:mrchumbastic]
  • Lorenz A

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • As part of the final steps to migrate WebExtensions and AddonManager telemetry analyses and dashboards away from legacy telemetry, we have introduced a new addons Glean ping to submit the list of active add-ons and themes (on startup, on updates and every 24h) – Bug 2000866
    • NOTE: The Glean addons.active_addons and addons.theme metrics will still be included in both the metrics and addons Glean pings. However, the addons ping is expected to become the preferred source of data for add-ons related analyses, whereas the metrics ping may be better suited for analyses that correlate other metrics with the add-ons listed in the addons.active_addons metric.
WebExtension APIs
  • Fixed a performance regression in the nativeMessaging API, originally introduced in Firefox 138 by Bug 1945470. A longer-term fix that reduces message round-trip time for Chrome Workers landed in Nightly 147, and a workaround disabling the DOM Workers timer manager through prefs was uplifted to ESR 140 – Bug 2002517

DevTools

WebDriver

New Tab Page

Search and Navigation

Urlbar
  • Marco landed a patch to limit semantic history search to a list of supported locales (1992920)
  • Dao and Moritz continued their searchbar widget work that falls under the larger multi-context address bar project (2003799, 2003804, 2003043, 2002799, 2002377, 1983016, 2001082)
  • Dharma fixed a bug with the unified search button so that we show a different icon when the user is searching and their “keyword.enabled” pref is false (1922114)
  • Dharma also landed a patch on the messaging shown to users when they organically navigate to google.com, bing.com or duckduckgo.com (1982133)
  • James fixed a bug where autocomplete stopped working for certain URLs (2001284)
Firefox Suggest
  • Drew and Daisuke have completed the implementation of online flight status suggestions for Firefox Suggest (1982135)
  • Drew fixed a flight result bug (2000873) and sports result bug (2000851) that affected screen readers
  • Drew made several improvements to how online Ad Marketplace suggestions are dismissed (2003624, 2002383)
  • Drew landed a patch that removes support for unmanaged Merino suggestions (2002141)
Places
  • Marco landed a patch to shrink the size of the favicons database by removing expired favicon relations (1959694)

Storybook/Reusable Components/Acorn Design System

  • Hanna Jones [:hjones] implemented a temporary moz-select icon rendering workaround in Toolkit so custom select options reliably show icons across platforms until a standards-compliant path lands.
    • This was to support search engines in SRD.

Settings Redesign

Mozilla Privacy Blog: Australia’s Social Media Ban: Why Age Limits Won’t Fix What Is Wrong With Online Platforms

On December 10th, Australia’s controversial law banning access for under 16-year-olds to certain social media platforms entered into force. Since its adoption in 2024, the law has sparked a global debate on age verification online and has inspired governments across the world to restrict minors’ access to parts of the web.

At Mozilla, privacy and user empowerment have always formed a core part of our mission. Mozilla supports strong, proportionate safeguards for minors, but we caution against approaches that rely on invasive identity checks, surveillance-based enforcement, or exclusionary defaults. Such interventions rely on the collection of personal and sensitive data and thus introduce major privacy and security risks. By following an approach of abstinence and access control, they undermine the rights of young people to express themselves online while doing little to address the child safety risks policymakers seek to tackle, such as insufficient content moderation, irresponsible data practices, and addictive design.

Rather than simply restricting access to some online platforms, policymakers should focus on fixing the systemic issues at play and incentivize the creation of online spaces that benefit young people and their development.

We are therefore disappointed by the blunt and disproportionate approach taken by the Australian government. We are also concerned about the impact this law, and others like it, will have on online privacy and security, on people’s ability to express themselves and access information, and therefore on the health of the web itself.

The Australian law designates certain services as “age-restricted social media platforms”. This category includes social media platforms like Instagram and TikTok, and video-sharing platforms like YouTube, and excludes certain categories of services, such as messaging providers, email services, and online games. Designated services must ensure that people under 16 years of age do not have accounts on their platforms. To do so, the ages of all users must be determined.

The Australian law provides almost no guidance on how service providers should balance privacy, security, and the robustness of age assurance technologies when performing age checks. Providers are thus left to choose from bad options. In the UK, a similar approach has resulted in users having to entrust some of their most sensitive data to a plethora of newly emerged commercial age assurance providers in order to retain access to the various services they use. These actors often ask for a lot of information while providing little accountability or transparency about their data handling practices. Beyond serious data breaches, this has also led to users losing access to messaging features and the censorship of content deemed sensitive, such as posts about the situation in Gaza or the war in Ukraine. But UK users have also demonstrated how ineffective the age-gating mechanisms of even some of the largest platforms are, using VPNs and video game features to bypass age barriers easily.

While many technologies exist to verify, estimate, or infer users’ ages, fundamental tensions around effectiveness, accessibility, privacy, and security have not been resolved. Rather, the most common forms of age assurance technologies all come with their own significant limitations:

  • Age estimation refers to AI-based systems that estimate a user’s age, usually based on biometric data like facial images. They may perform well in placing users within broad age bands, but often struggle at key legal thresholds, such as distinguishing between 15 and 16 years old. More troubling are equity concerns: facial estimation systems often underperform for people with darker skin tones, women, and those with non-binary or non-normative facial features due to biased or limited training datasets.
  • Age inference models are based on vast amounts of user data, such as browsing histories, to infer a user’s age. Similar limitations as for biometric age estimation apply: determining a user’s exact age along legal thresholds is challenging, and users exhibiting unusual behaviors might be profiled as younger or older than they are.
  • Age verification usually refers to verifying someone’s age by comparing it to a form of government-issued ID. This approach might lead to more precise outcomes, but risks excluding millions of people without access to government ID – many of them minors themselves. It also forces people to share some of their most sensitive data with private companies, where it will be at risk of surveillance, repurposing, or access by law enforcement. Zero-knowledge proofs (ZKPs) – a cryptographic way to prove whether a statement like “I am older than 18” is true without revealing one’s exact age – can help people limit what information they share. Deploying a ZKP-based system that meets goals for veracity and privacy requires considerably more development in both technical and governance aspects than any government has been willing to support. Beyond technical investment, clear frameworks to limit the information companies can collect are needed.

The Australian approach sends a worrying signal: that mandatory age verification and blanket bans are magical solutions to complex societal challenges, regardless of their implications for fundamental rights online. We are convinced, however, that there are rights-respecting alternatives policymakers can pursue to empower young people online and improve their safety and well-being:

  • Young people have a right to privacy, safety and expression, as put forward by the UN Convention on the Rights of the Child. Policymakers should adopt a child rights-based approach to online safety that balances young people’s protection with their rights to societal participation, free expression, and access to media and information, and should adopt policies to allow young people to benefit from responsibly run online services.
  • Blanket age-based bans and mandatory age verification should be rejected. Instead, parents and caregivers should be empowered to set age-appropriate limits on their children’s devices. Policymakers should implement programs that give children the tools to learn to manage online risks as they grow, and support the development of tools that parents, guardians, and schools can use for teaching and supervision.
  • Rather than banning young people from accessing certain platforms, policymakers should create incentives, enforce existing laws and close regulatory gaps where necessary to address problematic practices that put all social media users’ privacy, wellbeing, and security at risk. This includes extractive data practices and profiling, manipulative advertising, addictive design and dark patterns, and other harmful practices.

In Australia and elsewhere, we are committed to working alongside policymakers to advance meaningful protections for everyone online, while upholding fundamental rights, accessibility and user choice.

With special thanks to Martin Thomson, Distinguished Engineer at Mozilla, for his contributions to this blog. 

The post Australia’s Social Media Ban: Why Age Limits Won’t Fix What Is Wrong With Online Platforms appeared first on Open Policy & Advocacy.

The Rust Programming Language Blog: What do people love about Rust?

Rust has been named Stack Overflow's Most Loved (now called Most Admired) language every year since our 1.0 release in 2015. That means people who use Rust want to keep using Rust--and not just for performance-heavy stuff or embedded development, but for shell scripts, web apps, and all kinds of things you wouldn't expect. One of our participants captured it well when they said, "At this point, I don't want to write code in any other language but Rust."

When we sat down to crunch the vision doc data, one of the things we really wanted to explain was: What is it that inspires that strong loyalty to Rust? Based on the interviews, the answer is at once simple and complicated. The short version is that Rust empowers them to write reliable and efficient software. If that sounds familiar, it should: it's the slogan that we have right there on our web page. The more interesting question is how that empowerment comes about, and what it implies for how we evolve Rust.

What do people appreciate about Rust?

The first thing we noticed is that, throughout every conversation, no matter whether someone is writing their first Rust program or has been using it for years, no matter whether they're building massive data clusters or embedded devices or just messing around, there are a consistent set of things that they say they like about Rust.

The first is reliability. People love that "if it compiles, it works" feeling:

"What I really love about Rust is that if it compiles it usually runs. That is fantastic, and that is something that I'm not used to in Java." -- Senior software engineer working in automotive embedded systems

"Rust is one of those languages that has just got your back. You will have a lot more sleep and you actually have to be less clever." -- Rust consultant and open source framework developer

Another, of course, is efficiency. This comes up in particular at the extremes, both very large scale (data centers) and very small scale (embedded):

"I want to keep the machine resources there for the [main] computation. Not stealing resources for a watchdog." -- Software engineer working on data science platforms

"You also get a speed benefit from using Rust. For example, [..] just the fact that we changed from this Python component to a Rust component gave us a 100fold speed increase." -- Rust developer at a medical device startup

Efficiency comes up particularly often when talking to customers running "at-scale" workloads, where even small performance wins can translate into big cost savings:

"We have a library -- effectively it's like an embedded database -- that we deploy on lots of machines. It was written in Java and we recently rewrote it from Java to Rust and we got close to I think 9x to 10x performance wins." -- Distinguished engineer working on cloud infrastructure services

"I'm seeing 4x efficiency in the same module between Java code that loads a VM and Rust. That's a lot of money you save in data center cost." -- Backend engineering company founder specializing in financial services

At the other end of the spectrum, people doing embedded development or working at low-levels of abstraction highlight Rust's ability to give low-level control and access to system details:

"Rust was that replacement for C I'd been looking for forever." -- Backend engineering company founder specializing in financial services

"If you're going to write something new and you do kind of low-level systemsy stuff, I think Rust is honestly the only real choice." -- Distinguished engineer

Many people cite the importance of Rust's supportive tooling, which helps them get up and going quickly, and in particular the compiler's error messages:

"I think a big part of why I was able to succeed at learning Rust is the tooling. For me, getting started with Rust, the language was challenging, but the tooling was incredibly easy." -- Executive at a developer tools company

"The tooling really works for me and works for us. The number one way that I think I engage with Rust is through its tooling ecosystem. I build my code through Cargo. I test it through Cargo. We rely on Clippy for everything." -- Embedded systems engineer working on safety-critical robotics

"I think the error messages and suggestions from the Rust compiler are super helpful also." -- Professor specializing in formal verification

Finally, one of Rust's most important virtues is its extensibility. Both in the language itself and through the crates.io ecosystem, Rust is designed to let end-users create libraries and abstractions that meet their needs:

"The crate ecosystem combined with the stability guarantees and the semantic versioning mean that it's the best grab and go ecosystem I've ever seen." -- Computer science professor and programming language designer

"I think proc macros are a really big superpower for Rust." -- Creator and maintainer of Rust networking libraries

"Rust is incredibly good at making it very very easy to get started, to reuse things, just to experiment quickly with new tools, new libraries, all the rest of it... so for me, as an experimentation platform, it's great." -- Rust expert and consultant focused on embedded and real-time systems

But what they love is the sense of empowerment and versatility

Reliability, efficiency, tooling, ecosystem—these are all things that people appreciate about Rust. But what they love isn't any one of those things. It's the way the combination makes Rust a trusted, versatile tool that you can bring to virtually any problem:

"When I got to know about it, I was like 'yeah this is the language I've been looking for'. This is the language that will just make me stop thinking about using C and Python. So I just have to use Rust because then I can go as low as possible as high as possible." -- Software engineer and community organizer in Africa

"I wanted a language that works well from top to bottom in a stacking all the way from embedded to very fancy applications" -- Computer science professor and programming language designer

"If [Rust] is going to try and sort of sell itself more in any particular way, I would probably be saying high performance, highly expressive, general purpose language, with the great aspect that you can write everything from the top to the bottom of your stack in it." -- Rust expert and consultant focused on embedded and real-time systems

Each piece is necessary for the whole to work

Take away the reliability, and you don't trust it: you're second-guessing every deployment, afraid to refactor, hesitant to let junior developers touch the critical paths.

"Rust just lowers that bar. It's a lot easier to write correct Rust code. As a leader on the team, I feel a lot safer when we have less experienced engineers contributing to these critical applications." -- Distinguished engineer working on cloud infrastructure services

"My experience with writing Rust software tends to be once you've got it working, it stays working. That's a combination of a lot of care taken in terms of backwards compatibility with the language and a lot of care taken around the general ecosystem." -- Rust expert and consultant focused on embedded and real-time systems

Reliability also provides guardrails that help people enter new domains—whether you're a beginner learning the ropes or an expert venturing into unfamiliar territory:

"Rust introduces you to all these things, like match and all these really nice functional programming methods." -- Software engineer with production Rust experience

"I think Rust ownership discipline is useful both for regular Rust programmers and also for verification. I think it allows you to within the scope of your function to know very clearly what you're modifying, what's not being modified, what's aliased and what's not aliased." -- Professor specializing in formal verification

"I discovered Rust... and was basically using it just to give myself a little bit more confidence being like a solo firmware developer" -- Software engineer working on automotive digital cockpit systems

Take away the efficiency and low-level control, and there are places you can't go: embedded systems, real-time applications, anywhere that cost-per-cycle matters.

"The performance in Rust is nutty. It is so much better and it's safe. When we rewrote C++ and C libraries or C applications into Rust, they would end up being faster because Rust was better at laying out memory." -- Senior Principal Engineer leading consumer shopping experiences

"9 times out of 10, I write microcontroller code and I only test it through unit testing. I put it on real hardware and it just works the first time." -- Embedded systems engineer working on safety-critical robotics

"I can confidently build systems that scale." -- Engineering manager with 20 years experience in media and streaming platforms

Take away the tooling and ecosystem, and you can't get started -- or you can, but it's a slog, and you never feel productive.

"For me, getting started with Rust, the language was challenging, but the tooling was incredibly easy... I could just start writing code and it would build and run, and that to me made a huge difference." -- Founder and CEO of company creating developer tools

"Cargo is an amazing package manager. It is probably the best one I've ever worked with. I don't think I ever run into issues with Cargo. It just works." -- Software engineer with production Rust experience

"The Rust compiler is fantastic at kind of the errors it gives you. It's tremendously helpful in the type of errors it produces for it. But not just errors, but the fact it also catches the errors that other languages may not catch." -- Distinguished engineer working on cloud infrastructure services

The result: Rust as a gateway into new domains

When all these pieces come together, something interesting happens: Rust becomes a gateway into domains that would otherwise be inaccessible. We heard story after story of people whose careers changed because Rust gave them confidence to tackle things they couldn't before:

"I was civil engineering and I studied front-end development on my own, self taught. I had no computer background. I got interested in Rust and distributed systems and designs and systems around it. I changed my major, I studied CS and Rust at the same time." -- Software engineer transitioning to cryptography research

"I've been working with arbitrary subsidiaries of [a multinational engineering and technology company] for the last 25 years. Always doing software development mostly in the Java space... two years ago I started peeking into the automotive sector. In that context it was a natural consequence to either start working with C++ (which I did not want to do) or take the opportunity to dive into the newly established Rust ecosystem." -- Senior software engineer working in automotive embedded systems

"I started in blockchain. Currently I'm doing something else at my day job. Rust actually gave me the way to get into that domain." -- Rust developer and aerospace community leader

"Before that, I had 10 years of programming on some dynamic programming languages, especially Ruby, to develop web applications. I wanted to choose some language which focuses on system programming, so I chose Rust as my new choice. It is a change of my career." -- Rust consultant and author working in automotive systems and blockchain infrastructure

But the balance is crucial

Each of Rust's attributes is necessary for versatility across domains. But when taken too far, or when other attributes are missing, each can become an obstacle.

Example: Complex APIs and type complexity

One of the most powerful aspects of Rust is the way its type system allows modeling aspects of the application domain. This prevents bugs and also makes it easier for noobs to get started:

"Instead of using just a raw bit field, somebody encoded it into the type system. So when you'd have a function like 'open door', you can't pass an 'open door' if the door's already open. The type system will just kick that out and reject it." -- Software engineer working on automotive digital cockpit systems

"You can create contracts. For example, when you are allowed to use locks in which order." -- Senior embedded systems engineer working on automotive middleware development

The problem though is that sometimes the work to encode those invariants in types can create something that feels more complex than the problem itself:

"When you got Rust that's both async and generic and has lifetimes, then those types become so complicated that you basically have to be some sort of Rust god in order to even understand this code or be able to do it." -- Software engineer with production Rust experience

"Instead of spaghetti code, you have spaghetti typing" -- Platform architect at automotive semiconductor company

"I find it more opaque, harder to get my head around it. The types describe not just the interface of the thing but also the lifetime and how you are accessing it, whether it's on the stack or the heap, there's a lot of stuff packed into them." -- Software engineer working on data science platforms

This leads some to advocate for not using some of Rust's more complex features unless they are truly needed:

"My argument is that the hard parts of Rust -- traits, lifetimes, etc -- are not actually fundamental for being productive. There's a way to set up the learning curve and libraries to onboard people a lot faster." -- Creator and maintainer of Rust networking libraries

Example: Async ecosystem is performant but doesn't meet the bar for supportiveness

Async Rust has fueled a huge jump in using Rust to build network systems. But many commenters talked about the sense that "async Rust" was something altogether more difficult than sync Rust:

"I feel like there's a ramp in learning and then there's a jump and then there's async over here. And so the goal is to get enough excitement about Rust to where you can jump the chasm of sadness and land on the async Rust side." -- Software engineer working on automotive digital cockpit systems

"My general impression is actually pretty negative. It feels unbaked... there is a lot of arcane knowledge that you need in order to use it effectively, like Pin---like I could not tell you how Pin works, right?" -- Research software engineer with Rust expertise

For Rust to provide that "trusted tool that will help you tackle new domains" experience, people need to be able to leverage their expectations and knowledge of Rust in that new domain. With async, not only are there missing language features (e.g., async fn in traits only became available last year, and still has gaps), but the supportive tooling and ecosystem that users count on to "bridge the gap" elsewhere work less well:

"I was in favor of not using async, because the error messages were so hard to deal with." -- Desktop application developer

"The fact that there are still plenty of situations where you go that library looks useful, I want to use that library and then that immediately locks you into one of tokio-rs or one of the other runtimes, and you're like that's a bit disappointing because I was trying to write a library as well and now I'm locked into a runtime." -- Safety systems engineer working on functional safety for Linux

"We generally use Rust for services, and we use async a lot because a lot of libraries to interact with databases and other things are async. The times when we've had problems with this is like, um, unexplained high CPU usage, for example. The only really direct way to try to troubleshoot that or diagnose it is like, OK, I'm going to attach GDB and I'm gonna try to see what all of the threads are doing. GDB is -- I mean, this is not Rust's fault obviously -- but GDB is not a very easy to use tool, especially in a larger application. [..] And with async, it's, more difficult, because you don't see your code running, it's actually just sitting on the heap right now. Early on, I didn't actually realize that that was the case." -- Experienced Rust developer at a company using Rust and Python

Async is important enough that it merits a deep dive. Our research revealed a lot of frustration, but we didn't go deep enough to offer more specific insights. This would be a good task for the future User Research team (as proposed in our first post).

Example: The wealth of crates on crates.io are a key enabler but can be an obstacle

We mentioned earlier how Rust's extensibility is part of how it achieves versatility. Mechanisms like overloadable operators, traits, and macros let libraries create rich experiences for developers; a minimal standard library combined with easy package management encourage the creation of a rich ecosystem of crates covering needs both common and niche. However, particularly when people are first getting started, that extensibility can come at the cost of supportiveness, when the "tyranny of choice" becomes overwhelming:

"The crates to use are sort of undiscoverable. There's a layer of tacit knowledge about what crates to use for specific things that you kind of gather through experience and through difficulty. Everyone's doing all of their research." -- Web developer and conference speaker working on developer frameworks

"Crates.io gives you some of the metadata that you need to make those decisions, but it's not like a one stop shop, right? It's not like you go to crates.io and ask 'what I want to accomplish X, what library do I use'---it doesn't just answer that." -- Research software engineer

The Rust org has historically been reluctant to "bless" particular crates in the ecosystem. But the reality is that some crates are omnipresent. This is particularly challenging for new users to navigate:

"The tutorial uses Result<Box<dyn Error>> -- but nobody else does. Everybody uses anyhow-result... I started off using the result thing but all the information I found has example code using anyhow. It was a bit of a mismatch and I didn't know what I should do." -- Software engineer working on data science platforms

"There is no clear recorded consensus on which 3P crates to use. [..] Sometimes it's really not clear---which CBOR crate do you use?[..] It's not easy to see which crates are still actively maintained. [..] The fact that there are so many crates on crates.io makes that a little bit of a risk." -- Rust team from a large technology company

Recommendations

Enumerate Rust's design goals and integrate them into our processes

We recommend creating an RFC that defines the goals we are shooting for as we work on Rust. The RFC should cover the experience of using Rust in total (language, tools, and libraries). This RFC could be authored by the proposed User Research team, though it's not clear who should accept it — perhaps the User Research team itself, or perhaps the leadership council.

This post identified how the real "empowering magic" of Rust arises from achieving a number of different attributes all at once -- reliability, efficiency, low-level control, supportiveness, and so forth. It would be valuable to have a canonical list of those values that we could collectively refer to as a community and that we could use when evaluating RFCs or other proposed designs.

There have been a number of prior attempts at this work that we could build on (e.g., this post from Tyler Mandry, the Rustacean Principles, or the Rust Design Axioms). One insight from our research is that we don't need to define which values are "most important". We've seen that for Rust to truly work, it must achieve all the factors at once. Instead of ranking them, it may help to describe how it feels when you:

  • Don't achieve it (too little)
  • Get it right (the sweet spot)
  • Go overboard (too much)

This "goldilocks" framing helps people recognize where they are and course-correct, without creating false hierarchies.

Double down on extensibility

We recommend doubling down on extensibility as a core strategy. Rust's extensibility — traits, macros, operator overloading — has been key to its versatility. But that extensibility is currently concentrated in certain areas: the type system and early-stage proc macros. We should expand it to cover supportive interfaces (better diagnostics and guidance from crates) and compilation workflow (letting crates integrate at more stages of the build process).

Rust's extensibility is a big part of how Rust achieves versatility, and that versatility is a big part of what people love about Rust. Leveraging mechanisms like proc macros, the trait system, and the borrow checker, Rust crates are able to expose high-level, elegant interfaces that compile down to efficient machine code. At its best, it can feel a bit like magic.

Unfortunately, while Rust gives crates good tools for building safe, efficient abstractions, we don't provide tools to enable supportive ones. For built-in Rust language concepts, we have worked hard to create effective error messages that help steer users to success; we ship the compiler with lints that catch common mistakes or enforce important conventions. But crates benefit from none of this. With RFCs like RFC #3368, which introduced the diagnostic namespace and #[diagnostic::on_unimplemented], Rust has already begun moving in this direction. We should continue and look for opportunities to go further, particularly for proc macros, which often create DSL-like interfaces.
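
To make that concrete, here is a minimal sketch of the diagnostic namespace in action (the WireFormat trait and its messages are hypothetical): the attribute lets a crate replace the generic "trait bound not satisfied" error with domain-specific guidance.

// Minimal sketch using `#[diagnostic::on_unimplemented]` (stable since
// Rust 1.78); the trait and messages are invented for illustration.
#[diagnostic::on_unimplemented(
    message = "`{Self}` cannot be encoded to the wire format",
    label = "this argument needs a `WireFormat` implementation",
    note = "implement `WireFormat` for your type, or wrap it in one that has it"
)]
trait WireFormat {
    fn encode(&self) -> Vec<u8>;
}

impl WireFormat for String {
    fn encode(&self) -> Vec<u8> {
        self.as_bytes().to_vec()
    }
}

fn send<T: WireFormat>(value: T) -> Vec<u8> {
    value.encode()
}

fn main() {
    let _bytes = send(String::from("hello")); // compiles
    // send(42u8); // uncommenting this shows the custom message above
}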

The other major challenge for extensibility is concerned with the build system and backend. Rust's current extensibility mechanisms (e.g., build.rs, proc-macros) are focused on the early stages of the compilation process. But many extensions to Rust, ranging from interop to theorem proving to GPU programming to distributed systems, would benefit from being able to integrate into other stages of the compilation process. The Stable MIR project and the build-std project goal are two examples of this sort of work.

Doubling down on extensibility will not only make current Rust easier to use, it will enable and support Rust's use in new domains. Safety-critical applications in particular require a host of custom lints and tooling to support the associated standards. Compiler extensibility allows Rust to support those niche needs in a more general way.

Help users get oriented in the Rust ecosystem

We recommend finding ways to help users navigate the crates.io ecosystem. Idiomatic Rust today relies on custom crates for everything from error handling to async runtimes. Leaning on the ecosystem helps Rust scale to more domains and allows innovative new approaches to be discovered. But finding which crates to use presents a real obstacle when people are getting started. The Rust org maintains a carefully neutral stance, which is good, but it also means that people don't have anywhere to go for advice on a good "starter set" of crates.

The right solution here is not obvious. Expanding the standard library could cut off further experimentation; "blessing" crates carries risks of politics. But just because the right solution is difficult doesn't mean we should ignore the problem. Rust has a history of exploring creative solutions to old tradeoffs, and we should turn that energy to this problem as well.

Part of the solution is enabling better interop between libraries. This could come in the form of adding key interop traits (particularly for async) or by blessing standard building blocks (e.g., the http crate, which provides type definitions for HTTP libraries). Changes to coherence rules can also help, as the current rules do not permit a new interop trait to be introduced in the ecosystem and incrementally adopted.
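
The http crate illustrates the "standard building blocks" idea. A minimal sketch (the request itself is arbitrary): libraries that agree on these shared vocabulary types can interoperate without depending on each other.

use http::Request;

// Build a request using only the shared `http` types; any client
// library that accepts `http::Request` can send it, regardless of
// which runtime or HTTP implementation it uses internally.
fn build_request() -> http::Result<Request<Vec<u8>>> {
    Request::builder()
        .method("POST")
        .uri("https://example.com/api")
        .header("content-type", "application/json")
        .body(b"{}".to_vec())
}

fn main() {
    let req = build_request().expect("valid request");
    println!("{} {}", req.method(), req.uri());
}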

Conclusion

To sum up the main points in this post:

  • What people love about Rust is the way it empowers them to tackle tough problems and new domains. This is not the result of any one attribute but rather a careful balancing act between many; if any of them are compromised, the language suffers significantly.
  • We make three recommendations to help Rust continue to scale across domains and usage levels:
    • Enumerate and describe Rust's design goals and integrate them into our processes, helping to ensure they are observed by future language designers and the broader ecosystem.
    • Double down on extensibility, introducing the ability for crates to influence the development experience and the compilation pipeline.
    • Help users navigate the crates.io ecosystem and enable smoother interop.
  1. In 2025, 72% of Rust users said they wanted to keep using it. In the past, Rust had a way higher score than any other language, but this year, Gleam came awfully close, with 70%! Good for them! Gleam looks awesome--and hey, good choice on the fn keyword. ;)

  2. And, uh, how can we be sure not to mess it up?

  3. ...for experienced devs operating on less sleep, who do tend to act a lot like noobs.

The Mozilla BlogWelcoming John Solomon as Mozilla’s new Chief Marketing Officer

Mozilla has always believed that technology should serve people — not the other way around. As we enter a moment of rapid change in how people experience the internet and AI, we’re focused on building products that are private, transparent, and put people in control. Today, we’re excited to take an important step forward in that work by welcoming John Solomon as Mozilla’s new Chief Marketing Officer.

Solomon joins Mozilla this week and will lead our global marketing and communications teams. His arrival marks the next chapter in strengthening how we tell Mozilla’s story and how we bring our values to life in the products millions of people rely on every day.

Bringing more than two decades of experience building category-defining brands and leading global marketing teams, Solomon is a veteran brand builder with leadership roles at Therabody, Apple, and Beats by Dre. He was also named one of Forbes’ 50 Most Entrepreneurial CMOs for 2025. Solomon has a track record of turning products into cultural touchpoints and brands into household names. This experience is essential as Mozilla works to remind hundreds of millions of people around the world that they have real choice in the technology they use.

Solomon’s career spans companies that have shaped culture as much as they have shaped markets. At Therabody, he helped redefine and scale the company into a category-leading wellness brand with a mission to help people live healthier, happier, longer lives. At Beats, he played a pivotal role in the brand’s global rise and its breakthrough cultural relevance, later joining Apple’s worldwide Marcom organization to launch some of the company’s most iconic hardware, software, and digital services. Earlier in his career, he founded and sold enoVate, a consumer insights and strategy firm based in Shanghai.

For Mozilla, John steps into the role at a moment when trust in technology is eroding and AI is reshaping how people navigate the internet. Our responsibility — and our opportunity — is to build products that are private, transparent, and put people in control. Marketing plays a central role in making that mission visible, accessible, and relevant to a global audience. John not only understands the importance of this moment but also the impact it will have on future generations.

Solomon will lead Mozilla’s global marketing and communications teams, working closely with leaders across the company to build on the strong progress made this year.

Mozilla’s mission has always been to ensure the internet remains open, accessible, and driven by human agency. As we enter a new era shaped by AI and renewed debates over consumer agency, John’s experience — and his commitment to purpose-driven work — will help us meet this moment with clarity and ambition.

Please join us in welcoming John Solomon to Mozilla.

The post Welcoming John Solomon as Mozilla’s new Chief Marketing Officer appeared first on The Mozilla Blog.

Mozilla ThunderbirdThunderbird 2025 Review: Building Stronger for the Future

2025 was an exciting year for Thunderbird. Many improvements were shipped throughout the year, from faster updates with a new release cadence, to a modernized codebase for the desktop app. We made big strides on our mobile apps and introduced the upcoming Thunderbird Pro to the world.

As we wrap up the year, a huge thank you to our community and volunteer contributors, and to our donors whose financial support keeps the lights on for the dedicated team working on Thunderbird. Here’s what we accomplished in 2025 and what’s to come in the new year.

A Stronger Core, Built to Last

This year marked the release of Thunderbird 140 “Eclipse”, our latest Extended Support Release. Eclipse was more than a visual refresh. It was a deep clean of Thunderbird’s core, removing long-standing technical debt and modernizing large parts of the codebase.

The result is a healthier foundation that allows us to ship improvements more reliably and more often. Features like the new Account Hub, accessibility improvements, and cleaner visual controls are all part of this effort. They may look simple on the surface, but they represent significant behind the scenes progress that sets Thunderbird up for the long term.

Faster Updates, Delivered Monthly

Speaking of faster updates, in 2025 monthly releases became the default for Thunderbird desktop. This was a major shift from our previous focus on an annual cadence centered around the Extended Support Release.

Moving to monthly releases means new features land sooner, bug fixes arrive faster, and updates feel smoother instead of disruptive. Users no longer have to wait an entire year to benefit from improvements. Thunderbird now evolves continuously while maintaining the stability people expect.

Thunderbird Meets Exchange

One of the most requested features is finally here. Native Microsoft Exchange email support landed in Thunderbird’s monthly release channel with 145.0.

You can now connect Exchange accounts directly without relying on third party add-ons for email. Setup is simpler, syncing is more reliable, and Thunderbird works more naturally in Exchange based environments. Calendar and address book support are still in progress, but native email support marks an important milestone toward broader compatibility.

Mobile Moves Forward

Thunderbird’s mobile story continued to grow in 2025.

On Android, the team refined release processes, improved core experiences, and began breaking larger features into smaller, more frequently delivered updates. At the same time, Thunderbird for iOS took a major step forward with a basic developer testing app available via Apple TestFlight. This marked the first public signal that Thunderbird is officially expanding onto iOS, with active development well underway and headed toward iPhones in 2026.

Introducing Thunderbird Pro

In 2025, we announced Thundermail and Thunderbird Pro, the first ever email service from Thunderbird alongside new cloud based productivity features designed to work seamlessly with the app.

Thunderbird Pro will include:

  • Thundermail, an open source email service from Thunderbird
  • Appointment, a scheduling tool
  • Send, an end to end encrypted file sharing service

These services are built to respect user privacy, remain open source, and offer additional functionality by subscription for those who need it, without compromising the forever free and powerful Thunderbird desktop and mobile apps. Throughout the year, we made significant progress across all three services and launched the Thunderbird Pro website, marking a major step toward early access and testing. The Early Bird beta is set to kick off in the first part of 2026. Catch up on the full details in our latest update and, if you’re not on the waitlist yet, join in.

Looking Ahead to 2026

The work in 2025 set the stage for an even more ambitious year ahead.

In 2026, our desktop plans include updating our decades-old database, expanding Exchange and protocol support, and refreshing the Calendar UI. For Thunderbird Pro, we aim to release the Early Bird beta in the first part of the year. Our plans for Android focus on rearchitecture of old code, quality of life improvements, and a new UI. For iOS, we’re moving closer to an initial beta release with expanded protocol support. Be sure to follow this blog for updates on the desktop and mobile apps and Thunderbird Pro.

Thunderbird is moving faster, reaching more platforms, and building a more complete ecosystem while staying true to our values. Thanks for being part of the journey, and wishing all of you a fantastic 2026.

All of our work is funded solely by individual donations from our users and community.
Help support the future of Thunderbird!
For other ways to get involved with the Thunderbird project, visit our participate page.

The post Thunderbird 2025 Review: Building Stronger for the Future appeared first on The Thunderbird Blog.

Mozilla Localization (L10N)Contributor Spotlight: Andika

About You

My name is Andika. I’m from Indonesia, and I speak Indonesian, Javanese, and English. I’ve been contributing to Mozilla localization for a long time, long enough that I don’t clearly remember when I started. I mainly focus on Firefox and Thunderbird, but I also contribute to many other open source projects.

Exploring Padar Island where Komodo dragons can be spotted.

Contributing to Mozilla Localization

Q: Can you tell us a bit about your background and how you found localization?

A: I started my open source journey in the 1990s. Early on, I helped others through mailing lists by troubleshooting problems and answering questions. I also tried filing bugs and maintaining packages, but over time I felt those contributions didn’t always have a lasting impact.

Around 2005, I started translating open source software. Translation felt different — it felt like a contribution that could last longer than the technology itself. When I saw poor translation quality online, I felt I could do better, and that motivated me to get involved. Localization became the most meaningful way for me to give back.

Q: What does your contribution to Mozilla localization look like today?

A: I primarily work on Firefox and Thunderbird. Over the years, I’ve translated tens of thousands of strings, although some of those strings no longer exist in the codebase and remain only in translation memory. I also contribute to many other open source organizations, but Mozilla remains one of my main areas of focus.

Even though I don’t always use the products I localize — my professional work involves backend work, a lot of remote troubleshooting and maintenance — I stay connected to the quality of the translations through community collaboration and shared practices.

Workflow, Habits, and Collaboration

Q: How do you approach your localization work and collaborate with others?

A: Most of my localization work happens incrementally. I often carry unfinished translation files on my laptop so I can continue working offline, especially when the internet connection isn’t reliable. When I have multiple modules to choose from, I usually start with the ones that have the fewest untranslated strings. Seeing a module reach full translation gives me a lot of satisfaction.

To avoid burnout, I set small, realistic goals, sometimes something as simple as translating 50 strings before switching to another task. I tend to use small pockets of free time throughout the day, like waiting at a public transportation station or an appointment, and those fragments add up.

Collaboration plays a big role in maintaining quality. Within the Indonesian localization community, we use Telegram to discuss difficult or new terms and work toward consensus. Terminology and style guides are maintained together; it’s not a one-person responsibility.

I’ve also worked on localization in other projects like GNOME, where we translate module by module, we review each other’s work, and then commit changes as a group. Compared to Pontoon’s string-by-string approach, this workflow offers more flexibility, especially when working offline.

Perspective Across Open Source and Beyond

Q: You contribute to many open source projects. How does Mozilla localization compare, and what would you like to see improved?

A: For Indonesian localization, Mozilla is the most organized team I’ve worked with and has the largest active team. Some projects may appear larger on paper, but active participation matters more than numbers, and that’s where Mozilla really stands out.

One improvement I’d like to see is better support for offline translation in Pontoon. Another area is shortcut conflict detection — translators often can’t easily see whether keyboard shortcuts conflict unless all menu items or dialog elements are rendered together. Automated checks or rendered views of translated dialogs would make that process much easier.

That said, one thing Pontoon does very well, and that other projects could learn from, is the steadily improving quality of its online and AI-assisted translation suggestions.

Speaking at FOSDEM in February 2024 on “Long Term Effort to Keep Translations Up-To-Date”

Professional Life and a Personal Note

Q: What do you do professionally, and how does it connect with your localization work?

A: I work as an IT security consultant. I started using a PC in 1984, learning to program in BASIC, Pascal, FORTRAN, Assembly, and C. C is still my favorite language. I also tried various OSes, from CP/M, DOS, OS/2, VMS, Netware, and Windows to SCO and Solaris, then fell in love with Linux. I have been using Debian since version 1.3. Later I shifted my focus from programming to IT security. My job requires staying up to date with security concepts and terminology, which helps when translating security-related strings. At the same time, localization sometimes introduces me to features I might later use professionally. The two areas complement each other in unexpected ways.

As for something more personal: I hate horror movies, I love cats, and I’ve had the chance to witness the rise and fall of many technologies over the years. I also maintain a personal wiki to keep track of my open source work, though I keep telling myself I need to migrate it to GitHub one day.

Tarek ZiadéWhy Open Source Is Fundamental in AI (Essay)

Artificial intelligence is becoming a foundational layer of modern software. It is no longer confined to research labs, but embedded directly in everyday tools and user experiences.

As AI moves closer to users, openness becomes a question of power. Who can inspect these systems? Who can adapt them? And who ultimately controls how they evolve?

The web offers a useful reference point. Open source software and open standards turned the World Wide Web into shared infrastructure rather than a proprietary stack owned by a single company or government, even if many tried to enclose parts of it. That openness was not accidental. It shaped who could participate, compete, and be held accountable.

What Open Source AI Enables

Open source AI is often reduced to code availability. In practice, and as the Open Source Initiative (OSI) emphasizes in its Open Source AI Definition, it is about concrete freedoms.

An open source AI system can be used, studied, modified, and shared. Studying means inspecting behavior, limits, and failure modes. Modifying means adapting models to new domains, languages, or constraints. Sharing means deploying systems without being locked to a single vendor or API.

These freedoms must apply not only to code, but also to models, weights, and the tooling required to run them. Without that access, reuse is brittle and understanding remains shallow.

Open source enables verification, reproducibility, and portability. It allows systems to be audited, adapted, and redeployed independently. In a field defined by cost, scale, and complexity, these are not luxuries. They are prerequisites for agency.

Open access does not eliminate power imbalances. Compute, data, and expertise still matter. But it preserves the possibility of independent action, which is often the difference between participation and dependency.

Open Standards and Shared Infrastructure

Open source alone is not enough. Open standards define shared interfaces that allow independently built systems to work together.

The web proved this model at global scale. By separating interfaces from implementations, standards enabled competition without fragmentation. In AI, standards around model formats, inference interfaces, evaluation, and data documentation can lower switching costs and prevent ecosystems from hardening into silos controlled by a few gatekeepers.

Without standards, “openness” risks collapsing into a collection of incompatible artifacts, each tied to its own platform or service.

Looking Ahead

Some infrastructure works best when treated as a common good. The web’s resilience came from the fact that no single actor owned its foundations.

AI is on track to become similar infrastructure. The question is not whether it will be powerful, but whether it will be governable.

If core models, datasets, and interfaces are only accessible through proprietary APIs and cloud platforms, then “AI adoption” will mostly mean dependency. Choice will be limited to pricing tiers, usage caps, and terms of service.

Not everything should be owned and monetized by a small number of companies. Projects like Mozilla’s Common Voice show that shared assets can be built and maintained in the open, at meaningful scale.

Shared infrastructure also depends on shared spaces. Platforms like Hugging Face play a critical role by enabling collaboration around models, datasets, and tools, and by lowering the barrier to participation in open AI ecosystems.

Open source and open standards are not about nostalgia or ideology. They are about keeping the option to walk away. To inspect. To fork. To rebuild.

Once that option is gone, it is rarely recovered.

Mozilla Addons BlogPresenting 2025 Firefox Extension Developer Award Recipients

Extensions have long been at the heart of Firefox — providing users with powerful options to personalize their browsing experience. Nearly half of all Firefox users have installed at least one extension. These incredible tools and features are built by a community of more than 10,000 developers. While all developers contribute to the depth and diversity of our ecosystem, some of the most popular extensions provide significant global impact.

Today we celebrate our first cohort of notable developers. Below are this year’s recipients of the Firefox Extension Developer Award, presented to developers of some of the most popular Firefox extensions. The bespoke metal trophies were designed by Alper Böler, a California-based industrial designer and artist.

On behalf of Mozilla, and all Firefox users, thank you to all developers for your amazing contributions to the ecosystem!

Platinum

uBlock Origin — Ad blocker with 10M+ users. uBlock Origin has long been one of the most popular extensions for Firefox, providing a massive positive impact for users. This is a well-supported extension maintained by a passionate group of contributors, and we’d like to extend a special thank you to everyone who helps make this an exceptional extension.

(Reflecting astounding recent growth, uBlock Origin averaged 9.5M daily users when the awards were commissioned, which would have made it a Gold Award recipient; however it has since surpassed 10.5M daily users so we’ve elevated uBlock Origin to Platinum status.)

Silver

Adblock Plus — Debuted on Firefox all the way back in 2006.

Video DownloadHelper — Immensely capable media downloader.

Privacy Badger — “Privacy Badger is developed by the Electronic Frontier Foundation, a digital rights nonprofit with a 35-year history of defending online privacy. We created Privacy Badger over a decade ago to fight pervasive, nonconsensual tracking online. In the absence of strong privacy laws, surveillance has become the business model of the internet. Just browsing the web can expose sensitive data to advertisers, Big Tech companies, and data brokers. While we continue advocating for comprehensive privacy legislation, Privacy Badger gives people a quick, easy way to protect themselves. Privacy Badger is both a practical tool for individuals and part of EFF’s broader effort to end online surveillance for everyone.” – Lena Cohen, Staff Technologist at EFF

AdBlocker Ultimate — Also works beautifully on Firefox for Android.

AdGuard AdBlocker — Blocks ads and will also warn you about potentially malicious websites.

Dark Reader — “Working long hours in front of a bright computer screen made my eyes tired. LCD screens can feel like staring into a light bulb. Dark Reader started as a simple screen inverter to give my eyes a break. Over time, it evolved into a much more sophisticated tool, adapting to the growing needs of users.” – Alexander Shutau

AdBlock for Firefox — Arrived to the Firefox ecosystem in 2014.

DuckDuckGo Search & Tracker Protection — “At DuckDuckGo, we want to help people take back control of their personal information — whether that be when they’re making a search, using AI, emailing, or browsing. In 2017, we had a search engine, but we knew we wanted to extend privacy to the browsing experience. At that time we hadn’t built our own browser, so we bundled private search, tracking and fingerprinting protections, and more, into an easy-to-add web extension.” – Sam Macbeth

Bronze

Ghostery — “We wanted to create a truly user-focused ad blocker — one that doesn’t compromise on effectiveness, doesn’t maintain whitelists for advertisers, and gives people back control of their browsing experience. Many tools in the market were tied to ad industry interests, so our goal was to build a 100% independent, transparent solution. Ghostery was one of the first add-ons ever published on the Mozilla platform. Its original motivation was to bring transparency to the web.” – Krzysztof Modras

Return YouTube Dislike — “(I made it) for my own convenience. I wanted to use this feature myself, first and foremost. I think YouTube misses a lot by making dislike counts invisible.” – Dmitry Selivanov

Translate Web Pages — An effectively simple translation tool.

Bitwarden — “Back in 2015-’16, I was frustrated with the existing password management landscape. As a developer and engineer, I saw several problems that needed solving: complicated setup procedures, lack of cross-platform availability, and fragmented open source solutions that were hard to trust. I wanted to create a password manager that would meet the needs of someone like myself — a technologist who valued simplicity, transparency, and accessibility. The browser extension was one of the first components I built and it turned out to be crucial for Bitwarden since it made password management seamless for users across their daily web browsing.” – Kyle Spearrin

To Google Translate — “When I was at university, I started learning English on my own. I used to read articles in English about security and programming, and whenever I didn’t understand a word or was unsure about its pronunciation, I would copy and paste it into Google Translate to learn its meaning and how to say it. Over time, I realized this process was very manual and time-consuming, since I still had a lot of vocabulary to learn. That’s when I thought: ‘Is it possible to automate this to make it easier?’ That insight led me to build an add-on. In short, it started as a personal need, and later I realized that many others shared the same challenge. I never imagined the extension would reach and help so many people.” – Juan Escobar

IDM Integration Module — Companion extension to the popular desktop application.

Tampermonkey — “In 2008 I teamed up with a friend to develop a Greasemonkey userscript that automated parts of an online game. The script eventually grew into a full‑featured Firefox extension. When Chrome was released, I ported the extension to that browser and realized that the insights I gained about the WebExtension APIs could serve as the foundation for a new userscript manager. I later launched that manager, Tampermonkey, in May 2010. Firefox’s switch to WebExtensions in 2015 gave me an opportunity to bring Tampermonkey to Firefox as well.” – Jan Biniok

Grammarly: AI Writing and Grammar Checker — “When we first launched Grammarly, it was exclusively in our Grammarly editor, so users had to write directly into our web editor to get help with their writing. We realized there was so much more value in bringing Grammarly directly to where people write — in their browsers, on the sites they use every day for work and for school, and across 500,000 different apps and websites. Extensions became the natural way to meet people in their existing workflows rather than asking them to change how they already work, and it’s part of what makes Grammarly one of the top AI tools.” – Iryna Shamrai

Cisco Webex Extension — Companion extension for Cisco Webex Meetings or Webex App.

SponsorBlock – Skip Sponsorships on YouTube — “One of my favourite YouTube channels uploaded a video with a sponsor message that was deceptively placed into the video. It really made me frustrated. Then I had the idea that crowdsourcing sponsor timestamps could maybe just work.” – Ajay

ClearURLs — “The idea for the extension actually came up quite spontaneously during a lunch break at university. While studying computer science, a friend and I started talking about how frustrating all those tracking elements in URLs can be. We wondered if there was already a browser add-on that could automatically clean them up, but after some research we realized there really wasn’t anything like that out there.” – Kevin Röbert

The post Presenting 2025 Firefox Extension Developer Award Recipients appeared first on Mozilla Add-ons Community Blog.

Mozilla ThunderbirdThunderbird Monthly Development Digest: November/December 2025

Hello again from the Thunderbird development team as we start to wind down for the holidays! Over the past several weeks, our sprints have been focused on delivery and consolidation to clear our plates for a fresh start in the New Year. 

Following our successful in-person work-week to discuss all things protocol, we’ve brought Exchange support (EWS) to our Monthly release channel, completed much of the final phases of the Account Hub experience, and laid the groundwork for what comes next. Alongside this feature work, the team has spent a significant amount of time adapting to upstream platform changes and supporting our Services colleagues as we prepared for wider rollout. It’s been a period of steady progress, prioritization, and planning for the next major milestones.

Exchange Email Support

Since the last update, we’re so happy to finally announce that Exchange support for email has shipped to the Monthly release channel, accompanied by supporting blog posts, documentation and some fanfare. In the weeks leading up to and following that release, the team focused on closing out priority items, addressing stability issues, and ensuring the experience scales well as more users add their EWS-based Exchange accounts.

Work completed during this period includes:

  • NTLM authentication support and related request queueing
  • Fixes for crashes related to DNS resolution after in-depth investigation and collaboration with platform teams
  • Improvements to folder operations such as Empty Trash via EmptyFolder
  • Password-on-send prompting
  • Continued hardening of account setup and message handling paths

In parallel, the team has begun work on Graph API support for email, which is now moving rapidly through its early stages, thanks in large part to the solid foundation laid for EWS. It’s so nice when a plan comes together.

This work represents the next major milestone for Exchange support and will inform broader architectural refactoring planned for future phases.

The Exchange team also met in person to plan out upcoming milestones. These sessions allowed us to break down future work and begin early research and prototyping for:

  • Graph API-based email support
  • Architectural refactoring
  • Copy and move operations
  • Incoming and outgoing configuration improvements
  • Longer-term work on Graph API Calendar and Address Book integration

Keep track of our Graph API implementation here. 

Account Hub

A major focus during this period was completing the Email Account Hub Phase 3 milestone, with the final bugs landing and remaining items either completed or moved into maintenance. This work was prioritized to improve the experience for users setting up new accounts, particularly Exchange accounts.

Notable improvements and fixes include:

  • Increased robustness of the detection and setup flow
  • Improvements to error handling and recovery during account setup
  • Continued work on the manual configuration flow, developed in close collaboration with the Design team
  • Uplifts to ensure key fixes reached Beta and Monthly releases
  • Addition of telemetry to help us understand potential UX problems and improvements

With the primary Phase 3 goals now complete, the team has been able to shift attention back to other front-end initiatives while continuing to refine the Account Hub experience through targeted fixes and polish.

Follow progress in the meta bugs for phase 3 and telemetry.

Calendar UI Rebuild

Calendar UI work progressed more slowly during this period due to competing priorities (hiring!), in-person meetups, and planned time off, but planning and groundwork continued and development is back underway. The team:

  • Restarted sprint planning for upcoming milestones
  • Assigned tasks and estimated work for the next phase
  • Continued preparation for adopting Redux-based state management, recently vendored into the codebase

With Account Hub milestones now largely wrapped up, Calendar UI work is ramping back up as we move into the next development cycle.

Stay tuned to our milestones here.

Maintenance, Upstream adaptations, Recent Features and Fixes

Throughout this period, the team also spent a considerable amount of time responding to upstream changes that affected build stability, tests, and CI. Sheriffing remained challenging, with frequent tree breakages requiring investigation to distinguish upstream regressions from local changes. In addition to these items, we’ve been blessed with help from the larger development community to deliver a variety of improvements over the past two months. 

A very special shout-out to a new contributor who worked with our senior team to solve a 19-year-old problem relating to unread folders. Interactions like this are fuel for our team, and we’re incredibly grateful for the help.

If you would like to see new features as they land, and help us find some early bugs, you can try running daily and check the pushlog to see what has recently landed. This assistance is immensely helpful for catching problems early.

Toby Pilling

Senior Manager, Desktop Engineering

The post Thunderbird Monthly Development Digest: November/December 2025 appeared first on The Thunderbird Blog.

This Week In RustThis Week in Rust 630

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is logos, a modern lexer generator.

Thanks to Sam O'Brien for the (partial self-)suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Let us know if you would like your feature to be tracked as a part of this list.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No Calls for participation were submitted this week.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

  • RustWeek 2026 | CFP closes 2025-12-31 | Utrecht, The Netherlands | 2026-05-19 - 2026-05-20
  • RustConf 2026 | CFP closes 2026-02-16 | Montreal, Quebec, Canada | 2026-09-08 - 2026-09-10

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!

Updates from the Rust Project

482 pull requests were merged in the last week

Compiler
Library
Cargo
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

This week we saw several regressions, partly from the compiler doing more work. The remaining regressions are being investigated.

Triage done by @kobzol. Revision range: 55495234..21ff67df

Summary:

(instructions:u)            | mean  | range          | count
Regressions ❌ (primary)     | 0.5%  | [0.1%, 5.1%]   | 40
Regressions ❌ (secondary)   | 0.8%  | [0.0%, 3.0%]   | 63
Improvements ✅ (primary)    | -0.7% | [-1.5%, -0.1%] | 35
Improvements ✅ (secondary)  | -1.0% | [-7.4%, -0.0%] | 73
All ❌✅ (primary)           | -0.1% | [-1.5%, 5.1%]  | 75

3 Regressions, 2 Improvements, 5 Mixed; 2 of them in rollups. 36 artifact comparisons made in total.

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • Adding a crates.io Security tab

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs
Rust
Rust RFCs
Cargo
Leadership Council

No Items entered Final Comment Period this week for Compiler Team (MCPs only), Language Team, Language Reference or Unsafe Code Guidelines.

Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs

Upcoming Events

Rusty Events between 2025-12-17 - 2026-01-14 🦀

Virtual
Asia
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

I allow my code to be used for training AI on GitHub. Not because I fear AI taking our jobs—but because I’m confident my code will slow it down enough to save us all.

王翼翔 on rust-users

Thanks to Moy2010 for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by:

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Tarek Ziadérustnn - a Python and Rust Implementation of W3C WebNN aimed at Firefox

Over the past few weeks, I’ve been working on rustnn, a Rust implementation of the W3C WebNN specification.

What started as an experiment to gain a deeper understanding of WebNN quickly grew into something more substantial: a working implementation that is now very close to being a usable library.

I began this project after returning from TPAC, convinced that WebNN is the future of AI in the browser, and that Firefox needs to catch up with the work that has already been done in Chromium.

We are likely still months away from matching the level of maturity Chromium has achieved over several years of development. However, in just a few weeks I was able to make significant progress thanks to a few key factors:

  • The WebNN specification is clear and well written
  • The WPT conformance and validation tests are comprehensive
  • End-to-end JavaScript demos exercise WebNN in realistic scenarios
  • Chromium’s implementation has already surfaced many of the hard problems

Claude Code

All of these factors made it surprisingly easy to build the library quickly using Claude Code. Once I had enumerated the 95 operators that needed to be implemented, the workflow for each one was essentially the same:

  • use the specification to understand the operator
  • grab the relevant WPT tests
  • implement the operator in the CoreML and ONNX converters
  • validate it against the ONNX and CoreML executors
  • move on to the next operator

Claude consistently performed well. I was able to build a library that would normally have taken me months to write on my own. When something failed, narrowing down the problem was straightforward by iterating between the specification and the tests.

Because most of the work revolves around graph conversion and orchestrating existing inference libraries, the code generated by Claude is generally clean and easy to reason about.

The Chromium implementation was also a huge help when I started to get into weird corner cases, especially around CoreML. That code base has been developed over the years by people directly involved in the spec.

I have started adding performance tests, and there will likely be some manual follow-up work, but reaching a functional implementation so quickly is already a major milestone.

Why Rust?

These days, adding a new API to Firefox usually means creating a Rust library that is vendored into the tree and bound to Gecko using cbindgen, unless there is an existing C++ library that already fits the need.

This gradual “oxidation” of Firefox started years ago, and major features such as WebGPU have followed this model. Gecko is still a large C++ codebase, and integrating a Rust library is not trivial, but implementing something like WebNN outside the browser engine has a major advantage: it allows a much broader community to contribute. We are already seeing the benefits of this approach with wgpu.

I am not going to rehash the Rust vs. C++ debate. There is no shortage of material on why Rust has become an attractive choice for systems programming.

My first instinct was to see whether Chromium’s WebNN implementation could be reused. In practice, that turned out to be impractical. The code is deeply intertwined with Blink and its IPC layers, making it very difficult to extract reusable components in a clean way.

We also evaluated webnn-native, a C++ implementation developed within the Web Machine Learning community. While promising, the project had been effectively stalled for about two years and lacked support for the most recent inference backends. Extending it was an option, but it quickly became clear that a fresh Rust implementation would be both faster to iterate on and a better architectural fit for Gecko.

In the end, this is good news for the Web and for WebNN. An independent implementation helps validate the specification, exposes ambiguities earlier, and ultimately makes the standard stronger.

Finally, building the core in Rust makes it trivial to expose a Python API on top of it, which opens the door to experimentation and adoption by the broader ML community.

The architecture

rustnn follows a key principle: graph compilation creates a platform-independent representation; backend conversion happens at execution time.

This follows the same logic as Chromium and is a great way to make sure we can add more backends in the future.

flowchart TD
  RustNN["RustNN"]

  WebNNGraph["WebNN Graph"]
  Executors["Executors"]

  Converter["Converter"]

  ONNXRuntime["onnx runtime"]
  CoreMLExec["CoreML"]
  TensorRT["TensorRT"]

  ONNXGraph["ONNX Graph"]
  CoreMLGraph["CoreML Graph"]

  RustNN --> WebNNGraph
  RustNN --> Executors

  WebNNGraph --> Converter
  Converter --> ONNXGraph
  Converter --> CoreMLGraph

  Executors --> ONNXRuntime
  Executors --> CoreMLExec
  Executors --> TensorRT

In the library, we do an initial pass over the WebNN graph to produce an intermediate representation, then pick a converter to turn that graph into one that can run with an AI library. Executors then run the converted graph using those external libraries.

This is a very powerful design. For instance, I am playing with the TensorRT-RTX library, which can efficiently run AI on NVIDIA GPUs and has full support for ONNX graphs. This means we can run networks in rustnn using the ONNX converter combined with the TensorRT executor.
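
In code, that mix-and-match design might look roughly like this. This is a hypothetical sketch, not rustnn’s actual API; the trait and type names are invented:

// Hypothetical sketch of the converter/executor split.
struct WebNnGraph; // platform-independent intermediate representation
struct OnnxGraph;  // backend-specific graph

trait Converter {
    fn convert(&self, graph: &WebNnGraph) -> OnnxGraph;
}

trait Executor {
    fn run(&self, graph: &OnnxGraph, inputs: &[f32]) -> Vec<f32>;
}

struct OnnxConverter;
impl Converter for OnnxConverter {
    fn convert(&self, _graph: &WebNnGraph) -> OnnxGraph {
        OnnxGraph // stand-in for real graph conversion
    }
}

struct TensorRtExecutor;
impl Executor for TensorRtExecutor {
    fn run(&self, _graph: &OnnxGraph, inputs: &[f32]) -> Vec<f32> {
        inputs.to_vec() // stand-in for real inference
    }
}

// Because TensorRT consumes ONNX graphs, the ONNX converter can be
// paired with a TensorRT executor with no extra conversion code.
fn infer(c: &dyn Converter, e: &dyn Executor, g: &WebNnGraph, inputs: &[f32]) -> Vec<f32> {
    e.run(&c.convert(g), inputs)
}

fn main() {
    let out = infer(&OnnxConverter, &TensorRtExecutor, &WebNnGraph, &[1.0, 2.0]);
    println!("{out:?}");
}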

CoreML, ONNX and TensorRT

I picked CoreML and ONNX as my first target runtimes because I work on a MacBook, and because they are both implemented in Chromium.

Chromium uses ONNX on Windows because that library now ships with the latest Windows 11, and it falls back to DirectML. It also has a CoreML implementation on macOS.

So I went ahead and built both CoreML and ONNX as converters and executors until I could make the image classification demo work with the Python binding.

Next, I started to add TensorRT as an executor for Windows with NVIDIA GPUs. That one is a work in progress, because I have to work on a separate Windows computer and I am slower in that environment, but it’s technically already working. I started the trtx-rs Rust library to bind TensorRT, since the existing Rust binding was 5 years old.

PyWebNN

rustnn exposes a Python binding (PyWebNN) that implements the W3C WebNN API on top of the Rust core. You can use it for graph validation, conversion (ONNX/CoreML) and execution of neural networks.

Installation:

# Install from PyPI with bundled ONNX Runtime (v0.4.0+)
pip install pywebnn

# Or build from source with all backends (ONNX + CoreML)
git clone https://github.com/tarekziade/rustnn.git
cd rustnn
make python-dev
source .venv-webnn/bin/activate

Version 0.4.0+ includes bundled ONNX Runtime for immediate execution support. No additional dependencies needed!

This is a very small example, adapted from examples/python_matmul.py in the repo.

It shows the minimal flow:

  1. create an ML instance and context
  2. create a graph builder
  3. define two constant tensors
  4. build a matmul node
  5. compile the graph
  6. run it

Note: Use accelerated=False for CPU-only execution, or accelerated=True with power_preference="high-performance" for GPU acceleration.

# PyWebNN — tiny matmul example
import numpy as np
import webnn

# 1) create ML instance and context (CPU execution here)
ml = webnn.ML()
ctx = ml.create_context(power_preference="default", accelerated=False)

# 2) build a simple graph: Y = A @ B
builder = ctx.create_graph_builder()
A = np.array([[1., 2.], [3., 4.]], dtype=np.float32)
B = np.array([[5., 6.], [7., 8.]], dtype=np.float32)

a = builder.constant(A)    # constant input A
b = builder.constant(B)    # constant input B
y = builder.matmul(a, b)   # matmul node

graph = builder.build({"output": y})  # compile the graph

# 3) run the graph and print result
result = ctx.compute(graph, {})  # returns dict of outputs
print("Y =", result["output"])

What happens:

  • ML() creates the entry point following the W3C WebNN spec
  • create_context() creates a runtime context (choose CPU/GPU/NPU where supported)
  • create_graph_builder() constructs the WebNN graph using familiar ops (constant, matmul, etc.)
  • build() compiles the graph with named outputs (dict format)
  • compute() runs it and returns the outputs as a dict

Firefox

Paul Adenot is currently extending the Firefox AI Runtime platform to add a new specialized process that runs against GPUs for the WebSpeech implementation, and the WebNN API will use it when it lands in the browser.

In the meantime I have built a patch that adds the WebNN JS API to Firefox and executes it directly in the content process, which is a big security hole.

But it was a good way to start figuring out all the pieces, in particular how to bind the Rust library into the C++ layer using cbindgen, and how to create the WebIDL interface to provide the JS API.
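
For readers unfamiliar with cbindgen: it scans a Rust crate for extern "C" items and generates the matching C++ header. A rough, hypothetical sketch of what such a bridge function can look like (these are not the actual rustnn_bridge names or types):

// Hypothetical sketch of a cbindgen-friendly FFI surface; the real
// functions and types in dom/webnn/rustnn_bridge/ may differ.
#[repr(C)]
pub struct WebnnContextHandle {
    id: u64,
}

/// Create a backend context. cbindgen emits a C declaration for this,
/// which the C++ DOM code can call directly.
#[no_mangle]
pub extern "C" fn webnn_context_create() -> WebnnContextHandle {
    WebnnContextHandle { id: 1 } // stand-in for real backend setup
}

/// Release a context previously returned by `webnn_context_create`.
#[no_mangle]
pub extern "C" fn webnn_context_destroy(handle: WebnnContextHandle) {
    let _ = handle; // stand-in for real teardown
}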

The current series of patches is just a proof of concept, but I already have a fully functional demo of all basic operators, and a clone of the WebNN JS MobileNetV2 image classifier demo — see the video.

The WebNN implementation spans six distinct layers:

  1. JavaScript API Layer — Web-facing API (navigator.ml, MLContext, MLGraphBuilder, MLGraph, MLOperand, MLTensor)
  2. WebIDL Layer — Interface definition language defining the JavaScript API surface
  3. C++ DOM Implementation — Core implementation in dom/webnn/
  4. Rust FFI Bridge — Foreign Function Interface in dom/webnn/rustnn_bridge/
  5. rustnn Library — Rust implementation in third_party/rust/rustnn/
  6. Backend — Platform-specific backend (ONNX Runtime, CoreML, etc.) for neural network execution with hardware acceleration

… and has the following flow:

Graph Building Phase:

  • Web content calls navigator.ml.createContext()
  • C++ creates backend context via Rust FFI (ONNX Runtime or CoreML depending on platform)
  • Web content creates MLGraphBuilder and defines operations
  • Each operation creates an MLOperand representing the result
  • Web content calls builder.build() with output operands
  • C++ serializes operations to JSON and calls Rust FFI
  • Rustnn converts the graph to backend-specific format (ONNX or CoreML)
  • Backend creates an optimized execution session
  • Graph ID is returned to web content as MLGraph

sequenceDiagram
  participant JS as "JavaScript"
  participant CPP as "C++ (MLGraphBuilder)"
  participant FFI as "Rust FFI Bridge"
  participant RUST as "Rustnn Library"

  JS->>CPP: createContext()
  CPP->>FFI: rustnn_context_create()
  FFI->>RUST: Context::new()
  RUST-->>FFI: Context handle
  FFI-->>CPP: context_id
  CPP-->>JS: MLContext

  JS->>CPP: new MLGraphBuilder(context)
  CPP-->>JS: MLGraphBuilder

  JS->>CPP: input("x", shape, dataType)
  CPP-->>JS: MLOperand

  JS->>CPP: add(a, b)
  CPP-->>JS: MLOperand

  JS->>CPP: build({ output: operand })
  CPP->>FFI: rustnn_graph_build(ops_json)
  FFI->>RUST: GraphBuilder::build()

  Note right of RUST: Convert to backend format
  Note right of RUST: Create backend session

  RUST-->>FFI: Graph handle
  FFI-->>CPP: graph_id
  CPP-->>JS: MLGraph

Inference Phase:

  • Web content calls context.compute(graph, inputs, outputs)
  • C++ marshals input data and calls Rust FFI with graph ID
  • Rustnn retrieves the backend session and prepares input tensors
  • Backend (ONNX Runtime or CoreML) executes the computational graph
  • Hardware acceleration is automatically utilized when available
  • Output tensors are returned through Rust FFI
  • C++ copies output data to JavaScript-provided buffers
  • Promise resolves, indicating inference completion

sequenceDiagram
  participant JS as "JavaScript"
  participant CPP as "C++ (MLContext)"
  participant FFI as "Rust FFI Bridge"
  participant RUST as "Rustnn Library"
  participant BE as "Backend (ONNX/CoreML)"

  JS->>CPP: compute(graph, inputs, outputs)
  CPP->>FFI: rustnn_graph_compute(graph_id, inputs, outputs)
  FFI->>RUST: Graph::compute()
  RUST->>BE: session.run()

  Note right of BE: Execute operations

  BE-->>RUST: Output tensors
  RUST-->>FFI: Results
  FFI-->>CPP: Output data
  CPP-->>JS: Promise resolves

Again, this is not the final design since we need to run inference in a separate process and have an IPC layer between the C++ code and the Rust bridge.

Conclusion

rustnn started as a way for me to really understand WebNN, but it quickly turned into a convincing proof that the specification is solid, implementable, and ready to grow beyond a single browser engine. Having an independent implementation is healthy for the Web, and rustnn shows that WebNN can be built as a reusable, backend-agnostic library rather than something deeply tied to a single browser architecture.

This project is also my first substantial experience with Claude Code, and it fundamentally changed the pace at which I could work. Implementing nearly a hundred operators, wiring multiple backends, and validating everything against WPT would normally be a multi-month effort. With a strong spec, good tests, and a capable AI agent, it became an iterative and surprisingly enjoyable process. The result is not throwaway code, but a clean foundation that can be extended, optimized, and reviewed by others.

I am very optimistic about WebNN’s future in Firefox and on the Web in general. With rustnn and pywebnn, my hope is to make it easier for browser engineers, ML practitioners, and researchers to experiment, contribute, and push the ecosystem forward. There is still a lot to do, especially around performance, security, and process isolation, but the path forward is now much clearer.

Mozilla Attack & DefenseAttempting Cross Translation Unit Taint Analysis for Firefox

Preface

Browser security is a cutting-edge frontier for exploit mitigations, addressing bug classes holistically, and identifying vulnerabilities. Not everything we try works, and we think it’s important to document our shortcomings in addition to our successes. A responsible project uses all available tools to find bugs and vulnerabilities before shipping. Besides many other tools and techniques, Firefox uses Clang Tidy and the Clang Static Analyzer, including many customized checks for enforcing the coding conventions of the project. To extend these tools, Mozilla contacted Balázs, one of the maintainers of the Clang Static Analyzer, to help address problems encountered when exploring Cross Translation Unit (CTU) Static Analysis. Ultimately, we weren’t able to make as much headway with this project as we hoped, but we wanted to contribute our experience to the community and hopefully inspire future work. Be warned: this is a highly technical blog post.

The following sections describe some fundamental concepts: taint analysis, CTU, and the Clang Static Analyzer engine. This is followed by the problem statement and the solution, and finally some closing words.

Disclaimer: The work described here was sponsored by Mozilla.

Static Analysis Fundamentals

Taint analysis

Vulnerabilities often stem from using untrusted data in some way. Data from such sources is called “tainted” in static analysis, and “taint analysis” is the technique that tracks how such “tainted” values propagate or “flow” through the program.

In short, “taint sources” introduce a flow, such as reading from a socket. If a tainted value reaches a “taint sink”, we should report an error. These “sources” and “sinks” are often configurable.
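To make this concrete, here is a tiny toy model of taint propagation, written as a Python sketch. Everything in it (the flat statement list, the function names, the three-step example program) is illustrative only; real analyzers such as the Clang Static Analyzer track taint symbolically along execution paths rather than over a linear list of statements.

TAINT_SOURCES = {"read_socket"}   # calls whose results are tainted
TAINT_SINKS = {"exec_query"}      # calls that must never see tainted data

def analyze(program):
    tainted = set()
    for lhs, call, args in program:            # (target, callee, argument names)
        if call in TAINT_SOURCES:
            tainted.add(lhs)                   # a source introduces taint
        elif any(a in tainted for a in args):
            if call in TAINT_SINKS:
                print(f"warning: tainted value reaches sink {call}({args})")
            tainted.add(lhs)                   # an ordinary call propagates taint

# Example flow: socket data ends up in a query without sanitization.
analyze([
    ("buf", "read_socket", []),
    ("query", "build_query", ["buf"]),
    ("_", "exec_query", ["query"]),
])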

A YAML configuration file can be used with the Clang Static Analyzer to configure the taint rules.

Cross Translation Unit (CTU) analysis

The steps involved in bugs or vulnerabilities might cross file boundaries, and conventional static analysis tools that operate on a translation-unit basis would not find such issues. Luckily, the Clang Static Analyzer offers a CTU mode that loads the relevant pieces of the required translation units to enhance the contextual view of the analysis target, thus increasing the covered execution paths. Running CTU needs a bit of setup, but luckily tools like scan-build or CodeChecker have built-in support.

Path-sensitive analysis

The Clang Static Analyzer implements path-sensitive symbolic execution. There is an excellent talk on the topic, but let us give a refresher here.

Basically, it interprets the abstract syntax tree (AST) of the analyzed C/C++ program and builds up program facts statement by statement as it simulates different execution paths of the program. If it sees an if statement, it splits into two execution paths: one where the condition is assumed to be false, and another one where it’s assumed to be true. Loops are handled slightly differently, but that’s not the point of this post today.
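As a minimal illustration of this path splitting, here is a micro “symbolic executor” in Python. It is a sketch of the idea only, nothing like Clang’s actual engine: at each if, the state forks into one path that assumes the condition and one that assumes its negation, and each path carries its accumulated constraints.

def run(stmts, state=None, constraints=()):
    state = dict(state or {})
    for i, stmt in enumerate(stmts):
        if stmt[0] == "assign":                # ("assign", var, value)
            _, var, value = stmt
            state[var] = value
        elif stmt[0] == "if":                  # ("if", cond, then_stmts)
            _, cond, then_stmts = stmt
            rest = stmts[i + 1:]
            # Fork: one path assumes the condition, the other its negation.
            run(then_stmts + rest, state, constraints + (cond,))
            run(rest, state, constraints + (f"not({cond})",))
            return
        elif stmt[0] == "check":               # ("check", var)
            print(f"path {list(constraints)}: {stmt[1]} = {state.get(stmt[1])}")

run([
    ("assign", "p", None),
    ("if", "x > 0", [("assign", "p", "alloc()")]),
    ("check", "p"),                            # p is still None on the x <= 0 path
])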

When the engine sees a function call, it will jump to the definition of the callee (if available) and continue the analysis there with the arguments we had at the call site. We call this “inlining” in the Clang Static Analyzer. This makes the engine inter-procedural; in other words, it can reason across functions. Of course, this only works if it knows the callee. This means that without knowing the pointee of a function pointer or the dynamic type of a polymorphic object (one that has virtual functions), it cannot “inline” the callee, which in turn means that the engine must conservatively relax the program facts it has gathered so far, because they might be changed by the callee.

For example, if we have some allocated memory, and we pass that pointer to such a function, then the engine must assume that the pointer was potentially released, and not raise leak warnings after this point.

The conclusion here is that following the control-flow is critical, and virtual functions limit our ability to reason about this if we don’t know the dynamic type of objects.

So, taint analysis for Firefox?

Firefox has a lot of virtual functions!

We discussed that control-flow is critical for taint analysis, and virtual functions ruin the control-flow. A browser has almost every code pattern you can imagine, and it so happens that many of the motivating use cases for this analysis involve virtual functions that also happen to cross file boundaries.

Once upon a time…

It all started with Tom creating a couple of GitHub issues, like #114270 (which prompted a couple of smaller fixes that are not the subject of this post), and #62663.

This latter one was blocked by not being able to follow the callees of virtual functions, kicking off this whole subject and the prototype.

Plotting against virtual functions

The idea

Let’s just look at the AST and build the inheritance graph. After that, if we see a virtual call to data(), we could check who overrides this method.

Let’s say only classes A and B override this method in the translation unit. This means we could split the path into two and assume that on one path we call A::data() and on the other B::data().

// Base declares a virtual data(); classes A and B override it.
struct Base { virtual void data(); };
struct A : Base { void data() override; };
struct B : Base { void data() override; };

void func(Base *p) {
  p->data(); // ‘p’ might point to an object A or B here.
}

This looks nice and simple, and the core of the idea is solid. However, there are a couple of problems:

  1. One translation unit (TU) might define a class Derived, overriding data(), and then pass a Base pointer to another translation unit. When that TU is analyzed, it shouldn’t be sure that only classes A and B override data() just because it didn’t see Derived from the other TU. This is the problem with inheritance, which is an “open-set” relation: one cannot be sure to see the whole inheritance graph all at once.

  2. It’s not only that Derived might be in a different TU; it might be in a 3rd-party library, dynamically loaded at runtime. In this case, assuming a finite set of callees for a virtual function would be wrong.

Refining the idea

Fixing problem (2) is easy: we just assume that the list of potential callees always has an extra unknown callee, so that there is an execution path where the call is conservatively evaluated and the invalidations are performed - just like before.

Fixing problem (1) is more challenging because we need whole-program analysis. We need to create the inheritance graphs of each TU and then merge them into a unified graph. Once we’ve built that, we can run the Clang Static Analyzer and start reasoning about the overriders of virtual functions in the whole project. Consequently, in the example we discussed before, we would know that classes A, B and (crucially) Derived override data(). So after the call, we would have 4 execution paths: A, B, Derived, and a last one for the unknown case (like a dynamically loaded library that overrides this method).

It sounds great, but does it work?

It does! The analysis gives a list of the potential overriders of a virtual function. The Clang Static Analyzer was modified to do the path splits we discussed and remember the dynamic type constraints we learn on the way. There is one catch though.

Some taint flows cross file boundaries, and the Clang Static Analyzer has CTU to counter this, right?

CTU uses the “ASTImporter”, which is known to suffer from infinite recursion, crashes, and an incomplete implementation in terms of what constructs it can import. There are plenty of examples, but one we encountered was #123093.

Usually, fixing one of these is time-consuming and needs a deep understanding of the ASTImporter. And even if you fix one of them, there will be plenty of others to follow.

This patch for “devirtualizing” virtual function calls didn’t really help with the reliability of the ASTImporter. As the interesting taint flows cross file boundaries, the benefits of this new feature are unfortunately limited by the ASTImporter for Firefox.

Is it available in the Clang Static Analyzer already?

Unfortunately no, and as the contract is over, it is unlikely that these patches will merge upstream without others splitting them up and doing the labor of proposing them upstream. Note that this whole-program analysis is a brand-new feature, and this was just a quick prototype to check its viability.

Upstreaming would likely also need some wider consensus about the design.

Apparently, whole-project analyses could be important for other domains besides bug-finding, such as code-rewriting tools, which was the motivation for a recently posted RFC. The proposed framework in that RFC could potentially also work for the use case described in this blog post, but it’s important to highlight that this prototype was built before that RFC and framework, and consequently does not use them.

Balázs shared that working on the prototype was really motivating at first, but as he started to hit the bugs in the ASTImporter - effectively blocking the prototype - development slowed down. All in all, the prototype proved that using project-level information, such as “overriders”, could enable better control-flow modeling, but CTU analysis as we have it in Clang today shows its weaknesses when trying to resolve those calls. Without resolving these virtual calls, we can’t track taint flows across file boundaries in the Clang Static Analyzer.

What does this mean for Firefox?

Not much, unfortunately. If the ASTImporter worked as expected, then finalizing the prototype would meaningfully improve taint analysis on code using virtual functions.

You can find the source code in Balázs’ GitHub repo at steakhal/llvm-project/devirtualize-for-each-overrider; it served well for exploration and rapid prototyping but is far from production quality.

Bonus: We need to talk about the ASTImporter

From the cases Balázs looked at, it seems that qualified names, such as the std in std::unique_ptr, trigger the import of a std DeclContext, which in turn triggers the import of all the declarations within that lexical declaration context. In other words, we start importing a lot more than strictly necessary to make the std:: qualification work. This in turn increases the chances of hitting something that crashes or simply fails to import, poisoning the original AST we wanted to import into. This is likely not how it should work, and might be a good subject to discuss in the future.

Note that the ASTImporter can be configured to do so-called “minimal imports”, which is probably what we should have for the Clang Static Analyzer; however, this is not set, and setting it would lead to even more crashes. Balázs didn’t investigate this further, but it might be something to explore in the future.

Tarek ZiadéAll I Want for Christmas is a Better Alt Text – Part 2

In Part 1, I explained why high-quality alt text matters, how modern vision–language models can help, and why balanced, carefully curated datasets are essential for training.

In this second part, I focus on architecture. I explain why I decided to move away from my initial design, what I learned from that first implementation, and why I ultimately settled on a prefix-conditioning + LoRA approach.

This choice is driven by practical constraints. For alt-text generation, the goal is not exhaustive visual understanding, but a short, reliable sentence that conveys the essence of an image to visually impaired users. Within that scope, prefix conditioning offers a much simpler model that is easier to train, easier to deploy, and better aligned with accessibility requirements.

More broadly, the PDF.js alt-text project aims to explore how far we can push small, efficient vision–language models for accessibility use cases. Rather than optimizing for peak benchmark scores, the focus is on reliability, fast iteration cycles, limited compute, and deployable models.

DistilViT is intentionally constrained. Smaller models, fewer trainable parameters, and simpler architectures make it possible to experiment rapidly, control bias more carefully through dataset curation, and realistically target on-device or near-device inference scenarios.

What I started with: a classic encoder–decoder model

My first implementation relied on Hugging Face’s VisionEncoderDecoderModel. Concretely, it paired:

  • a ViT-based vision encoder, and
  • a GPT-2–style decoder (distilgpt2),

trained end-to-end using Seq2SeqTrainer.

Conceptually, the architecture looked like this:

flowchart TD
    A[Image] --> B["Vision Encoder (ViT)"]
    B --> C[Encoder hidden states]
    C -->|Cross-attention| D["Decoder (GPT-2)"]
    D --> E[Caption]

This worked. GPT-2 generated captions, and the system was usable. I was inspired by The Illustrated Image Captioning Using Transformers, followed that recipe, and reduced the decoder size by using a distilled version of GPT-2.

What I did not fully appreciate at the time was what choosing GPT-2 implied under the hood.

Unlike T5 or BART, GPT-2 is a decoder-only language model. In its original architecture, it does not support cross-attention or encoder hidden states.

So why did this setup work?

Because VisionEncoderDecoderModel.from_encoder_decoder_pretrained() wraps GPT-2 and injects cross-attention layers. This effectively converts GPT-2 into a seq2seq-style decoder by adding encoder–decoder attention blocks and routing the vision encoder outputs through them.
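For illustration, assembling such a model is a single call. This is a sketch; the exact encoder checkpoint name is my assumption, matching the ViT-plus-distilgpt2 pairing described above.

from transformers import VisionEncoderDecoderModel

# Pair a pretrained ViT encoder with distilgpt2. Since GPT-2 has no
# cross-attention of its own, the wrapper injects freshly initialized
# cross-attention layers into every decoder block at this point.
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/vit-base-patch16-224-in21k",  # vision encoder (assumed checkpoint)
    "distilgpt2",                         # decoder-only LM adapted into a seq2seq decoder
)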

That distinction matters. These cross-attention layers are initialized from scratch, require substantial training signal, and introduce additional state to manage at inference time. Exporting the model and handling caching also become more complex.

This approach is valid, but it turned out to be architecturally heavier than expected for the scale and goals of this project. Training was slower, GPU memory usage was higher, and deployment friction increased.

Models like T5 or BART avoid this injection step because they already contain pretrained cross-attention blocks. However, those blocks were trained to attend to text encoder states and still require fine-tuning to adapt properly to vision features.

At that point, I started looking for an alternative and came across prefix conditioning.

Cross-attention vs prefix conditioning

It is worth stepping back and comparing these two approaches without framing one as universally superior.

Cross-attention gives the decoder continuous access to visual features at every generation step. This is extremely powerful for tasks that require fine-grained spatial grounding, OCR, counting, or reasoning over multiple regions in an image.

Prefix conditioning, by contrast, injects visual information once, as a sequence of projected vision tokens prepended to the text embeddings. After that, the model relies entirely on standard self-attention.

This leads to clear trade-offs:

  • Cross-attention provides stronger and more precise grounding.
  • Prefix conditioning trades some of that precision for architectural simplicity.

For my use case, this trade-off is appropriate. The goal of alt text here is not to enumerate details or perform spatial reasoning, but to produce a single, concise sentence that conveys the overall content of an image to visually impaired users. Captions are short, factual, and descriptive, and they primarily require global visual context rather than continuous visual querying.

Under these conditions, prefix conditioning is often sufficient, while being far easier to train, debug, and deploy than a full encoder–decoder setup.

Prefix conditioning with LoRA

The architecture I use now looks like this:

flowchart TD
    A[Image] --> B["SigLIP Vision Encoder (frozen)"]
    B --> C["Projection Head (Linear / MLP)"]
    C --> D[Vision embeddings as prefix tokens]
    D --> E[Decoder-only LM with LoRA]
    E --> F["Caption (25–30 tokens)"]

Instead of asking the decoder to attend to an encoder, I inject the visual information directly into the decoder’s input space as prefix tokens.

  • No cross-attention
  • No encoder–decoder coupling
  • Just conditioning

The language model only needs standard causal self-attention. Any decoder-only LLM works out of the box, without architectural changes or special forward signatures.

This restores flexibility. I can swap language models freely without touching the vision side.
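In code, the conditioning step boils down to a concatenation in embedding space. The following PyTorch sketch uses assumed names and dimensions of my own; the real training loop also needs labels, attention masks, and batching.

import torch
import torch.nn as nn

vision_dim, lm_dim = 768, 576     # assumed sizes for this sketch

# The projection head is the only module trained from scratch: it maps
# frozen vision features into the language model's embedding space.
projection = nn.Linear(vision_dim, lm_dim)

def build_inputs(vision_features, token_embeddings):
    # vision_features: (batch, n_prefix, vision_dim) from the frozen encoder
    # token_embeddings: (batch, seq_len, lm_dim) from the LM's embedding table
    prefix = projection(vision_features)               # (batch, n_prefix, lm_dim)
    return torch.cat([prefix, token_embeddings], dim=1)

# The decoder is then called with inputs_embeds= instead of token ids,
# so any decoder-only LM works unmodified:
#   outputs = lm(inputs_embeds=build_inputs(features, embeddings))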

I apply LoRA adapters to the language model’s attention projection matrices.

  • The base language model remains frozen
  • The vision encoder remains frozen
  • Only the projection head and LoRA adapters are trained

In practice, this means:

  • ~221M total parameters
  • ~2.2M trainable parameters
  • Roughly 1 percent of the model updated

Training is faster, more stable, and far less memory-intensive. The risk of overfitting drops significantly when working with small datasets.
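With the peft library, that setup looks roughly like the sketch below. The rank and the target module names are assumptions; which projection matrices exist, and what they are called, depends on the decoder architecture.

from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=16,                         # adapter rank (assumed value)
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # model-dependent
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# Wrap the frozen decoder-only LM; only the adapters receive gradients.
lm = get_peft_model(lm, lora_config)
lm.print_trainable_parameters()   # on the order of ~2.2M of ~221M parameters

# The vision encoder stays frozen as well; only the projection head and
# the LoRA adapters are trained.
for p in vision_encoder.parameters():
    p.requires_grad = False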

Deployment also becomes simpler.

  • The vision encoder exports cleanly to ONNX
  • The projection head is trivial
  • The decoder is a standard causal LM with past key values

There is no cross-attention graph, no encoder cache plumbing, and no exotic export logic. ONNX Runtime becomes a realistic target instead of a constant source of friction.
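For example, exporting the frozen vision encoder can be as simple as the following sketch (the dummy input shape, tensor names, and file name are assumptions):

import torch

dummy = torch.randn(1, 3, 224, 224)   # one dummy image fixes the input signature
torch.onnx.export(
    vision_encoder,                   # the frozen encoder from the earlier sketch
    (dummy,),
    "vision_encoder.onnx",
    input_names=["pixel_values"],
    output_names=["image_embeds"],
)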

Summary

Cross-attention remains a powerful and sometimes necessary tool. For this project, however, it added complexity without delivering better alt text.

Prefix conditioning gives me:

  • a simpler architecture
  • faster iteration
  • better tooling compatibility
  • easier deployment
  • freedom to use modern decoder-only models

Initial experiments show that the new architecture produces alt text of comparable quality to the previous one, with only a 1 to 2 percent CLIP score difference when trained on the same datasets. The key difference is training speed, which is roughly five times faster.

Next, my goal is to surpass DistilViT’s current quality by improving the training dataset, while keeping an architecture that is simple, fast to train, and flexible enough to accommodate future decoder models.

References

Useful Links

The Rust Programming Language BlogProject goals update — November 2025

The Rust project is currently working towards a slate of 41 project goals, with 13 of them designated as Flagship Goals. This post provides selected updates on our progress towards these goals (or, in some cases, lack thereof). The full details for any particular goal are available in its associated tracking issue on the rust-project-goals repository.

Flagship goals

"Beyond the `&`"

Progress
Point of contact

Frank King

Champions

compiler (Oliver Scherer), lang (TC)

Task owners

Frank King

1 detailed update available.

Comment by @frank-king posted on 2025-11-21:

Status update:

Design a language feature to solve Field Projections (rust-lang/rust-project-goals#390)
Progress
Point of contact

Benno Lossin

Champions

lang (Tyler Mandry)

Task owners

Benno Lossin

TL;DR:
  • We have made lots of progress with the novel place-based proposal made by @Nadrieril. Since the last update, he released his idea as a blog post and we have had an immense amount of discussion on Zulip. There are still many open questions and problems left to solve. If you have any ideas, feel free to share them on Zulip.

  • At the beginning of this month, we explored moving projections and &own. We also looked into reducing the number of projection traits.

  • The PR https://github.com/rust-lang/rust/pull/146307 has been stale for this month, but will be picked up again in December.

3 detailed updates available.

Comment by @BennoLossin posted on 2025-11-01:

Moving Projections and &own

Moving projections are a third kind of projection that already exists in Rust today for Box as well as for any local variable holding a struct. While we won't be including them in an MVP, we still want to make sure that we can extend the language with moving projections. Here is an example with Box:

fn destructure_box(mut b: Box<Struct>) -> Box<Struct> {
    let f1 = b.f1;
    b.f1 = F1::new();
    b
}

This projection moves the field out of the box, invalidating it in the process. To make the box valid again, a new value has to be moved in for that field. Alternatively, the partially valid box can be dropped; this will drop all other fields of Struct and then deallocate the Box. Note that this last property is implemented by compiler magic today, and moving projections would allow this special behavior of Box to be a library implementation instead.

To make this kind of projection available for all types, we can make it a proper operation by adding this trait:

pub unsafe trait ProjectMove: Projectable {
    type OutputMove<'a, F: Field<Base = Self::Target>>;
    
    unsafe fn project_move<'a, F: Field<Base = Self::Target>>(
        this: *mut Self,
    ) -> Self::OutputMove<'a, F>;
    
    unsafe fn drop_husk(husk: *mut Self);
}

Importantly, we also need a drop_husk function, which is responsible for cleaning up the "husk" that remains when all fields have been move-projected. In the case of Box, it deallocates the memory. So for Box we could implement this trait like this:

impl<T> ProjectMove for Box<T> {
    type OutputMove<'a, F: Field<Base = T>> = F::Type;

    unsafe fn project_move<'a, F: Field<Base = T>>(
        this: *mut Self,
    ) -> F::Type {
        let ptr = unsafe { (*this).0.pointer.as_ptr() };
        ptr::read(unsafe {
            <*const T as Project>::project::<'a, F>(&raw const ptr)
        })
    }

    unsafe fn drop_husk(husk: *mut Self) {
        // this is exactly the code run by `Box::drop` today, as the compiler
        // drops the `T` before `Box::drop` is run.
        let ptr = (*husk).0;
        unsafe {
            let layout = Layout::for_value_raw(ptr.as_ptr());
            if layout.size() != 0 {
                (*husk).1.deallocate(From::from(ptr.cast()), layout);
            }
        }
    }
}

To support moving back into a value we have two options:

  1. Add a ProjectMoveBack trait that declares an operation which accepts a value that is moved back into the projected one, or
  2. Add &own references.

Until now, we have explored the second option, because there are lots of other applications for &own.

&own References

A small interlude on &own references.

An &'a own T is a special kind of exclusive reference that owns the value it points to. This means that if you drop an &own T, you also drop the pointee. You can obtain an &own T by constructing it directly from a local variable (&own my_local) or by deriving it from an existing &own via field projections. Smart pointers generally also allow creating an &own T from an &own SmartPtr<T>.

One important difference to &mut T is that &own is not only temporally unique (i.e. there are no other references to that value not derived from it) but also unique for that value. In other words, one can create at most one &own T to a local variable.

let mut val = Struct { ... };
let x = &own val; //~ HELP: ownership transferred here
drop(x);
let y = &own val; //~ ERROR: cannot own `val` twice

Since the drop(x) statement drops val, the borrow checker must disallow any future access. However, we are allowed to move a value back into the memory of val:

let mut val = Struct { ... };
let x = &own val;
drop(x);
val = Struct { ... };
let y = &own val;

The lifetime 'a in &'a own T is that of the backing memory. It means that when 'a expires, the memory is also no longer valid (or rather, it cannot be proven to be valid after 'a). For this reason, an &'a own T has to be dropped (or forgotten) before 'a expires (since after that it cannot be dropped any more).

&own T itself supports moving projections (another indicator that having them is a good idea), though only for types that don't implement Drop (similar to normal struct destructuring -- there are also talks about lifting this requirement, but no new issues arise from projecting &own).

&own and pinning

To make &pin own T with !(T: Unpin) sound in the face of panics, we have to add drop flags or have unforgettable types. We explored a design using drop flags below; there are separate ongoing efforts experimenting with a Leak/Forget trait, and I think it might be a better solution than drop flags, at least for &own.

We need drop flags to ensure the drop guarantee of pinned values. The drop flag will be stored when the original &own is created and it will live on the stack of the function that created it. They are needed for the following scenario:

fn foo() {
    let x = Struct { ... };
    bar(&pin own x);
}

fn bar(x: &pin own Struct) {
    if random() {
        std::mem::forget(x);
    }
    if random() {
        panic!()
    }
}

Since x is pinned on the stack, it needs to be dropped before foo returns (even if it unwinds). When bar forgets the owned reference, the destructor is not run; if bar then panics, the destructor needs to be run in foo. But since foo gave away ownership of x to bar, it is possible that bar already dropped x (this is the case when the first random() call returns false). To keep track of this, we need a drop flag in the stack frame of foo that gets set to true when x is dropped.

There are several issues with drop flags:

  • we can't have &'static own T pointing to non-static values (for example coming from a Box::leak_owned function).
  • field projections complicate things: if we project to a field, then we could possibly forget one field, but drop another
    • solution: just store drop flags not only for the whole struct, but also all transitive fields that implement Drop
  • there is different behavior between &own T and &pin own T: the former can be forgotten and the destructor will not run; the latter can also be forgotten, but the destructor runs regardless.

This last point convinces me that we actually want &pin own T: !Leak when T: !Leak; but IIUC, that wouldn't prevent the following code from working:

fn main() { 
    let x = Struct { ... };
    let x = &pin own x;
    Box::leak(Box::new(x));
}
DerefMove

The DerefMove operation & trait is something that has been discussed in the past (I haven't dug up any discussions on it though). It is the analogue for &own of what Deref is for &. We need to figure out the hierarchy wrt. Deref and DerefMut, but ignoring that issue for the moment, here is what DerefMove would look like:

trait DerefMove: DropHusk {
    type Target: ?Sized;

    fn deref_move(&own self) -> &own Self::Target;
}

Note the super trait requirement DropHusk -- it provides a special drop operation for Self when the &own Self::Target reference has been dropped. Box<T> for example would deallocate the backing memory via DropHusk. Its definition looks like this:

pub unsafe trait DropHusk {
    unsafe fn drop_husk(husk: *mut Self);
}

We would of course also use this trait for ProjectMove. Implementing DropHusk on its own does nothing; implementing DerefMove or ProjectMove will make the compiler call drop_husk instead of Drop::drop when the value goes out of scope after it has been projected or DerefMove::deref_move has been called.

We observed that DerefMove is a lot more restrictive in its usability than Deref, and we need projections to make it actually useful in the common case. The reason for this is that an &own can only be created once, but one would like to be able to create it once per field (which is exactly what moving projections allow). Consider this example:

let b = Box::new(Struct { ... });
let field1 = &own b.field1; // desugars to `DerefMove::deref_move`
let field2 = &own b.field2; //~ ERROR: cannot own `b` twice

The "cannot own `b` twice error comes from the way the deref desugaring works:

let b = Box::new(Struct { ... });
let field1 = &own DerefMove::deref_move(&own b).field1;
let field2 = &own DerefMove::deref_move(&own b).field2;
//                                       ^^^ ERROR: cannot own `b` twice

Now it's clear that we're trying to create two &own to the same value, and that can't work (the issue also arises for &mut, but that is already covered by ProjectExclusive).

We can write this instead:

let b = Box::new(Struct { ... });
let b = &own b;
let field1 = &own b.field1;
let field2 = &own b.field2;

But that's cumbersome.

We also note that ProjectMove is the correct projection for ArcRef, as it avoids any additional refcount updates. We can rely on the ergonomic refcounting proposal to provide ergonomic ways to clone the value & perform more projections.

Comment by @BennoLossin posted on 2025-11-02:

Having a single Project trait

The definitions of the now 3 Project* traits are 100% verbatim the same (modulo renaming, of course), so we spent some time trying to unify them into a single trait. While we cannot get rid of having three of them, we can merge them into a single trait by adding a generic:

#[sealed]
pub trait ProjectKind {
    type Ptr<T: ?Sized>;
}

pub enum Shared {}
pub enum Exclusive {}

impl ProjectKind for Shared {
    type Ptr<T: ?Sized> = *const T;
}

impl ProjectKind for Exclusive {
    type Ptr<T: ?Sized> = *mut T;
}

pub trait Projectable {
    type Target;
}

pub unsafe trait Project<Kind: ProjectKind>: Projectable {
    type Output<'a, F: Field<Base = Self::Target>>;

    unsafe fn project<'a, F: Field<Base = Self::Target>>(
        this: Kind::Ptr<Self>,
    ) -> Self::Output<'a, F>;
}

We would need some more compiler magic to ensure that nobody implements this trait generically, i.e. impl<K> Project<K> for MyType, to keep our approach extensible (this could be an attribute if it is also useful in other cases: #[rustc_deny_generic_impls]).

The benefit of merging the definitions is that we only have one single trait to document, and we could also add documentation on the ProjectKind types. There are also ergonomic downsides; for example, all output types are now called Output and thus need to be fully qualified if multiple projection impls exist (<MyType as Project<Exclusive>>::Output<'_, F> vs MyType::OutputExclusive<'_, F>).

To make this proposal compatible with moving projections, we either need more compiler magic to ensure that Kind = Move requires Self: DropHusk, or we could use associated traits and add one to ProjectKind that's then used in Project (Kind = Shared would then set it to Pointee).

This approach also makes me think a bit more about the syntax: if we discover more projections in the future, it might make sense to go for an extensible approach, like @keyword expr{->,.@,.,~}ident (so for example @move x->y or @mut x.y).

Comment by @BennoLossin posted on 2025-11-06:

A new Perspective: Projections via Places

@Nadrieril opened this Zulip thread with the idea that "The normal rust way to reborrow a field uses places". He then proceeded to brainstorm a similar design for field projections with a crucial difference: making places the fundamental building block. We had a very long discussion in that thread (exchanging the existing ideas about field projection and the novel place-involving ones) that culminated in this awesome writeup by @Nadrieril: https://hackmd.io/@Nadrieril/HJ0tuCO1-e. It is a very thorough document, so I will only be able to summarize it partially here:

  • instead of the Project* traits, we have the Place* traits, which govern what kind of place operations are possible on *x given x: MySmartPtr; those are reading, writing, and borrowing.
  • we can allow custom smart pointer reborrowing possibly using the syntax @MySmartPtr <place-expr>
  • we need multi-projections to allow simultaneous existence of &mut x.field.a and &mut x.field.b

We still have many things to flesh out in this proposal (some of these pointed out by @Nadrieril):

  • how do FRTs still fit into the equation? And what are the types implementing the Projection trait?
  • What do we do about non-indirected place containers like MaybeUninit<T>, UnsafeCell<T> and ManuallyDrop<T>?
  • does BorrowKind work as a model for the borrow checker?
  • how do we make match ergonomics work nicely?
  • how do we get around the orphan rule limitations?
  • several smaller issues/questions...

This is a very interesting viewpoint and I'm inclined to make this the main proposal idea. The traits are not too different from the current field projection design and the special borrow checker behavior was also intended at least for the first level of fields. So this is a natural evolution of the field projection proposal. Thanks a lot to @Nadrieril for the stellar writeup!

Progress
Point of contact

Aapo Alasuutari

Champions

compiler (Oliver Scherer), lang (Tyler Mandry)

Task owners

Aapo Alasuutari

1 detailed update available.

Comment by @aapoalas posted on 2025-11-11:

We've worked towards coherence checking of the CoerceShared trait, and have come to the conclusion that (at least as a first step) only one lifetime, the first one, shall participate in reborrowing. Problems abound with how to store the field mappings for CoerceShared.

"Flexible, fast(er) compilation"

Progress
Point of contact

David Wood

Champions

cargo (Eric Huss), compiler (David Wood), libs (Amanieu d'Antras)

Task owners

Adam Gemmell, David Wood

1 detailed update available.

Comment by @davidtwco posted on 2025-11-22:

Our first RFC - rust-lang/rfcs#3873 - is in the FCP process, waiting on boxes being checked. rust-lang/rfcs#3874 and rust-lang/rfcs#3875 are receiving feedback which is being addressed.

Production-ready cranelift backend (rust-lang/rust-project-goals#397)
Progress Will not complete
Point of contact

Folkert de Vries

Champions

compiler (bjorn3)

Task owners

bjorn3, Folkert de Vries, [Trifecta Tech Foundation]

No detailed updates available.
Promoting Parallel Front End (rust-lang/rust-project-goals#121)
Progress
Point of contact

Sparrow Li

Task owners

Sparrow Li

No detailed updates available.
Relink don't Rebuild (rust-lang/rust-project-goals#400)
Progress Will not complete
Point of contact

Jane Lusby

Champions

cargo (Weihang Lo), compiler (Oliver Scherer)

Task owners

@dropbear32, @osiewicz

1 detailed update available.

Comment by @yaahc posted on 2025-11-21:

linking this here so people know why there hasn't been any progress on this project goal.

#t-compiler > 2025H2 Goal Review @ 💬

"Higher-level Rust"

Progress
Point of contact

Niko Matsakis

Champions

compiler (Santiago Pastorino), lang (Niko Matsakis)

Task owners

Niko Matsakis, Santiago Pastorino

2 detailed updates available.

Comment by @nikomatsakis posted on 2025-11-05:

Three new blog posts:

The most important conclusions from those posts are

  • Explicit capture clauses would be useful; I proposed one specific syntax, but bikeshedding will be required. To be "ergonomic" we need the ability to refer to full places, e.g., move(cx.foo.clone()) || use(cx.foo).
  • We should consider Alias or Share as the name for the Handle trait; I am currently leaning towards Alias because it can be used as both a noun and a verb and is a bit more comparable to clone -- i.e., you can say "an alias of foo" just like you'd say "a clone of foo".
  • We should look for solutions that apply well to clone and alias so that higher-level Rust gets the ergonomic benefits even when cloning "heavier-weight" types to which Alias does not apply.
Comment by @nikomatsakis posted on 2025-11-12:

New blog post:

  • https://smallcultfollowing.com/babysteps/blog/2025/11/10/just-call-clone/

Exploring one way to make things more ergonomic while remaining explicit, which is to make .clone() and .alias() (1) understood by move closure desugaring and (2) optimized away when redundant.

Stabilize cargo-script (rust-lang/rust-project-goals#119)
Progress
Point of contact

Ed Page

Champions

cargo (Ed Page), lang (Josh Triplett), lang-docs (Josh Triplett)

Task owners

Ed Page

1 detailed update available.

Comment by @epage posted on 2025-11-21:

Key developments

  • rust-lang/rust#148051

Blockers:

  • rustdoc deciding on and implementing how they want frontmatter handled in doctests

"Unblocking dormant traits"

Progress
Point of contact

Taylor Cramer

Champions

lang (Taylor Cramer), types (Oliver Scherer)

Task owners

Taylor Cramer, Taylor Cramer & others

No detailed updates available.
In-place initialization (rust-lang/rust-project-goals#395)
Progress
Point of contact

Alice Ryhl

Champions

lang (Taylor Cramer)

Task owners

Benno Lossin, Alice Ryhl, Michael Goulet, Taylor Cramer, Josh Triplett, Gary Guo, Yoshua Wuyts

1 detailed update available.

Comment by @Darksonn posted on 2025-11-14:

On Nov 12th, there was a mini-design meeting organized by Xiangfei Ding on inplace initialization. The attendees were Xiangfei Ding, Alice Ryhl, Benno Lossin, Tyler Mandry, and Taylor Cramer.

We discussed this document: https://hackmd.io/@rust-for-linux-/H11r2RXpgl

Next-generation trait solver (rust-lang/rust-project-goals#113)
Progress
Point of contact

lcnr

Champions

types (lcnr)

Task owners

Boxy, Michael Goulet, lcnr

1 detailed update available.

Comment by @lcnr posted on 2025-11-13:

The new solver is now officially used by Rust Analyzer: https://rust-analyzer.github.io/thisweek/2025/10/27/changelog-299.html. A huge shoutout to Jack Huey, Chayim Refael Friedman, Shoyu Vanilla, and Laurențiu Nicola for that work.

On the rustc end Rémy Rakic spent a lot of time triaging the most recent crater run. This uncovered a bunch of new edge cases, resulting in 6 new tracked issues.

We've also merged fixes for 4 minor issues over the last 3 weeks: https://github.com/rust-lang/rust/pull/148292 https://github.com/rust-lang/rust/pull/148173 https://github.com/rust-lang/rust/pull/147840. Thanks to Jana Dönszelmann, tiif and @adwinwhite for implementing these. @adwinwhite was also instrumental in diagnosing the underlying issue of https://github.com/rust-lang/trait-system-refactor-initiative/issues/245.

Going forward, we intend to continue the crater triage while fixing remaining issues until we're ready for stabilization :> the remaining issues are tracked in https://github.com/orgs/rust-lang/projects/61/views/1.

Stabilizable Polonius support on nightly (rust-lang/rust-project-goals#118)
Progress
Point of contact

Rémy Rakic

Champions

types (Jack Huey)

Task owners

Amanda Stjerna, Rémy Rakic, Niko Matsakis

1 detailed update available.

Comment by @lqd posted on 2025-11-25:

Key developments:

  • I prototyped building blocks to fix the liveness soundness issue, but this was deemed too brittle.
  • so we prepared a meeting for the types team to discuss the problem, and possible solutions.
  • it turns out the issue is related to another soundness issue for opaque types in the new trait solver, https://github.com/rust-lang/trait-system-refactor-initiative/issues/159, which tiif is already working on. The same solution is needed for both issues: with the full implied bounds available for opaque types in liveness, we'll be able to require all the regions outliving the opaque lower bound to be live, while ignoring the unrelated regions (which the hidden type cannot use anyway). There will be no relevant dead region through which loans flow, and code relying on unused lifetimes being dead (like a lot of ed2024 code with the default capture changes) will still compile
  • we prepared another types-team meeting to discuss polonius in general, and the alpha algorithm in particular, to share knowledge among the team. This will also be helpful later on to apply member constraints in a location-sensitive manner, since right now they're applied at the SCC level and we need to make sure these constraints with the choice regions are present in the localized subset graph.
  • niko and tiif have made a lot of progress on adding support for borrow checking in a-mir-formality, so I've also joined these meetings, since we'll also want to model the alpha.
  • I've looked into Prusti's Place Capability Graphs, and plan to see how to integrate the alpha there, if possible with the fuzzing capabilities mentioned in the paper, with the usual goal of expanding testing, as we've mentioned many times
  • we also had some discussion about a possible master's student project, and thought about different practical and theoretical topics

Goals looking for help


Other goal updates

Progress Completed
Point of contact

Guillaume Gomez

Champions

rustdoc (Guillaume Gomez)

1 detailed update available.

Comment by @GuillaumeGomez posted on 2025-11-21:

Done in https://github.com/rust-lang/rust-forge/pull/852.

Borrow checking in a-mir-formality (rust-lang/rust-project-goals#122)
Progress
Point of contact

Niko Matsakis

Champions

types (Niko Matsakis)

Task owners

Niko Matsakis, tiif

3 detailed updates available.

Comment by @nikomatsakis posted on 2025-11-05:

tiif and I have been meeting weekly here and pushing changes to the living-large branch of a-mir-formality/nikomatsakis.

We are making progress, we have a minirust type checker and the start of a borrow checker. We've decided to try to use a "judgment-like" approach rather than modeling this as dataflow, as I believe it will give greater insight into the "structure" of the trait checker.

Comment by @nikomatsakis posted on 2025-11-12:

tiif, Jack Huey, and I met today and did more work on the "living-large" branch. The borrow checker judgments are taking shape. My expectation is that we will walk the CFG, tracking the sets of borrows that have occurred so far. At each statement, we will have a judgment that looks at (a) the subtyping relations generated by the type check (flow-insensitive, like NLL); (b) the loans issued so far and not killed; and (c) the live places that may be accessed later. We'll require then that if you are accessing a place P, then there are no loans accessible from a live place that have borrowed P in an incompatible way.

Comment by @nikomatsakis posted on 2025-11-19:

Continued work this week:

Elaborated some on the definition of when an access or a statement is valid. We are working our way towards what we believe will be a "largely accurate" model of today's NLL -- obviously we'll then want to test it and compare behavior around various edge cases.

C++/Rust Interop Problem Space Mapping (rust-lang/rust-project-goals#388)
Progress
Point of contact

Jon Bauman

Champions

compiler (Oliver Scherer), lang (Tyler Mandry), libs (David Tolnay)

Task owners

Jon Bauman

1 detailed update available.

Comment by @baumanj posted on 2025-11-26:

Key developments: What has happened since the last time. It's perfectly ok to list "nothing" if that's the truth, we know people get busy.

Nothing! This is the first update and I have yet to focus attention on the project goal. For context, I am employed by the Rust Foundation leading the C++ Interoperability initiative and so far have been executing against the strategy detailed in the problem statement. Owing to greater than anticipated success and deadlines related to WG21 meetings, I've been focusing on the Social Interoperability strategy recently. I have just reached a point where I can turn more attention to the other strategies and so expect to make progress on this goal soon.

Blockers: List any Rust teams you are waiting on and what you are waiting for.

None; I'm getting excellent support from the Project in everything I'm doing. My successes thus far would not have been possible without them, and there are too many to enumerate in this space. There will be a blog post coming soon detailing the past year of work in the initiative where I intend to go into detail. Watch this space for updates.

Help wanted: Are there places where you are looking for contribution or feedback from the broader community?

I am always interested in contribution and feedback. If you're interested, please reach out via interop@rustfoundation.org or t-lang/interop.

Comprehensive niche checks for Rust (rust-lang/rust-project-goals#262)
Progress
Point of contact

Bastian Kersting

Champions

compiler (Ben Kimock), opsem (Ben Kimock)

Task owners

Bastian Kersting, Jakob Koschel

No detailed updates available.
Progress
Point of contact

Boxy

Champions

lang (Niko Matsakis)

Task owners

Boxy, Noah Lev

2 detailed updates available.

Comment by @BoxyUwU posted on 2025-11-05:

Since the lang meeting most progress on this project goal has been unrelated to adt_const_params.

There's been a large amount of work on min_generic_const_args, specifically Noah Lev's PR (rust-lang/rust#139558); once it lands, the core of the impl work for the feature will be done. I've reviewed it together with Oliver Scherer and it's pretty much ready to go other than some small reviews.

Once this PR lands, I'm hoping there will be a fair number of "smallish" PRs to make, which could be a good set of PRs for mentoring new-ish contributors.

Comment by @BoxyUwU posted on 2025-11-29:

Once again most progress here has been on min_generic_const_args.

Noah Lev's PR (rust-lang/rust#139558) has now landed, as well as an additional PR of his: rust-lang/rust#148716. Between the two of these, the core impl should be "mostly done" now, at least with no additional feature gates enabled :).

The next big step is to make the min_generic_const_args prototype work well with adt_const_params, which I've implemented in rust-lang/rust#149136 and rust-lang/rust#149114. These PRs still need to be reviewed, but the bulk of the impl work there is now done. They allow for constructing ADTs where the field values may themselves be const parameters or non-concrete uses of type_consts (i.e. the values are const argument positions).

Once my PRs have landed, I would consider mgca as a prototype to be truly "done", though not done as an actual feature. Huge thanks to camelid for sticking through a bunch of fairly painful PRs to get us to this point.

Continue resolving `cargo-semver-checks` blockers for merging into cargo (rust-lang/rust-project-goals#104)
Progress
Point of contact

Predrag Gruevski

Champions

cargo (Ed Page), rustdoc (Alona Enraght-Moony)

Task owners

Predrag Gruevski

2 detailed updates available.

Comment by @obi1kenobi posted on 2025-11-02:

Status update as of November 1

Key developments:

  • Draft PR for exposing implied bounds in rustdoc JSON: https://github.com/rust-lang/rust/pull/148379
  • A concrete plan for how that new info turns into dozens of new lints covering many kinds of bounds

Linting ?Sized and 'static bounds turned out to be quite a bit more complex than I anticipated. The key issue is that seeing T: Foo + ?Sized does not guarantee that T can be unsized, since we might have Foo: Sized which renders the ?Sized relaxation ineffective. Similarly, seeing T: Foo might also non-obviously imply T: 'static via a similar implied bound.

Failure to correctly account for implied bounds would lead to catastrophic false-positives and false-negatives. For example, changing T: Foo to T: Foo + 'static could be a major breaking change or a no-op, depending on whether we have Foo: 'static (either directly or implicitly via other trait bounds).

We cannot determine implied bounds using information present in rustdoc JSON today, so the rustdoc team and I have been iterating on the best way to compute and include that information in rustdoc JSON. Assuming something similar to the aforementioned PR becomes part of rustdoc JSON, cargo-semver-checks stands to gain several dozen new lints covering these tricky cases over trait associated types, generic type parameters, and APIT/RPIT/RPITIT.

Comment by @obi1kenobi posted on 2025-11-23:

Google Summer of Code 2025 is complete + finally some movement on cross-crate linting! 🚀

Key developments

  • Two students had a successful conclusion of Google Summer of Code working on cargo-semver-checks; find more details here!
  • rustdoc JSON now includes rlib information, following the design for cross-crate rustdoc JSON info created at RustWeek 2025: https://github.com/rust-lang/rust/pull/149043
  • A cargo issue was discovered that prevents this rlib info from being used; it's currently being triaged: https://github.com/rust-lang/cargo/issues/16291
  • Once that's resolved, we'll have enough here for a basic prototype. Getting features right in dependencies will likely require more work due to having many more cargo-related edge cases.
Develop the capabilities to keep the FLS up to date (rust-lang/rust-project-goals#391)
Progress
Point of contact

Pete LeVasseur

Champions

bootstrap (Jakub Beránek), lang (Niko Matsakis), spec (Pete LeVasseur)

Task owners

Pete LeVasseur, Contributors from Ferrous Systems and others TBD, t-spec and contributors from Ferrous Systems

2 detailed updates available.

Comment by @PLeVasseur posted on 2025-11-05:

Meeting minutes from meeting held on 2025-10-31 (thank you to Tomas Sedovic 🥰)

Top-level:

  • Keep high quality bar, merge small, well-vetted changes when possible
  • Need concentrated effort to get the 1.90 FLS updates merged
  • Once 1.90 merged, we attempt first go as a team at 1.91

Discussion:

  • Suggest that everyone read the Glossary as a starting point
  • How to best triage / handle incoming issues?
Comment by @PLeVasseur posted on 2025-11-21:

Meeting notes here: 2025-11-14 - t-fls Meeting

Key developments: PR merged for the 1.90 update of the FLS. We're now preparing to work on the 1.91 update of the FLS.

Blockers: None currently.

Help wanted: Anyone that's familiar with the Rust Reference is more than encouraged to read through the FLS to get a sense of it and where further alignment may be possible. Feel free to open issues on the FLS repo as you find things.

Emit Retags in Codegen (rust-lang/rust-project-goals#392)
Progress
Point of contact

Ian McCormack

Champions

compiler (Ralf Jung), opsem (Ralf Jung)

Task owners

Ian McCormack

1 detailed update available.

Comment by @icmccorm posted on 2025-11-11:

We've posted a pre-RFC for feedback, and we'll continue updating and expanding the draft here. This reflects most of the current state of the implementation, aside from tracking interior mutability precisely, which is still TBD but is described in the RFC.

Expand the Rust Reference to specify more aspects of the Rust language (rust-lang/rust-project-goals#394)
Progress
Point of contact

Josh Triplett

Champions

lang-docs (Josh Triplett), spec (Josh Triplett)

Task owners

Amanieu d'Antras, Guillaume Gomez, Jack Huey, Josh Triplett, lcnr, Mara Bos, Vadim Petrochenkov, Jane Lusby

1 detailed update available.

Comment by @joshtriplett posted on 2025-11-12:

We're putting together a prototype/demo of our reference changes at https://rust-lang.github.io/project-goal-reference-expansion/. This includes a demonstration of tooling changes to provide stability markers (both "documenting unstable Rust" and "unstable documentation of stable Rust").

Finish the libtest json output experiment (rust-lang/rust-project-goals#255)
Progress
Point of contact

Ed Page

Champions

cargo (Ed Page)

Task owners

Ed Page

1 detailed update available.

Comment by @epage posted on 2025-11-21:

Key developments:

  • libtest2:
    • #[test] macro added
    • Support for should_panic
    • Support for ignore
    • Support for custom error types
    • compile-fail tests for macros

Blockers

  • None

Help wanted:

Finish the std::offload module (rust-lang/rust-project-goals#109)
Progress
Point of contact

Manuel Drehwald

Champions

compiler (Manuel Drehwald), lang (TC)

Task owners

Manuel Drehwald, LLVM offload/GPU contributors

1 detailed update available.

Comment by @ZuseZ4 posted on 2025-11-19:

Automatic Differentiation

Time for the next update. By now, we've had std::autodiff in upstream rustc for around a year, but not in nightly. In order to get some more test users, I asked the infra team to re-evaluate just shipping autodiff as-is. This means that, for the moment, we will increase the binary size of rustc by ~5%, even for nightly users who don't use this feature. We still have an open issue to avoid this overhead by using dlopen; please reach out if you have time to help. Thankfully, my request was accepted, so I spent most of my time lately preparing that release.

  1. As part of my cleanup I went through old issues and realized we now partly support rlibs! That's a huge improvement, because it means you can use autodiff not only in your main.rs file, but also in dependencies (either lib.rs, or even relying on crates that use autodiff). With the help of Ben Kimock I figured out how to get the remaining cases covered; hopefully the PR will land soon.
  2. I started documentation improvements in https://github.com/rust-lang/rust/pull/149082 and https://github.com/rust-lang/rust/pull/148201, which should be visible on the website from tomorrow onwards. They are likely still not perfect, so please keep opening issues if you have questions.
  3. We now provide a helpful error message if a user forgets enabling lto=fat: https://github.com/rust-lang/rust/pull/148855
  4. After two months of work, @sgasho managed to add Rust CI to Enzyme! Unfortunately, Enzyme devs broke it and then disabled it, so we'll need to talk about maintaining it as part of shipping Enzyme in nightly.

I have the following items on my TODO list as part of shipping AD on nightly:

  1. Re-enable macOS build (probably easy)
  2. Talk with Enzyme Devs about maintenance
  3. Merge rlib support (under review)
  4. Upstream AD benchmarks from r-l/enzyme to r-l/r as codegen tests (easy)
  5. Write a blog post/article for https://blog.rust-lang.org/inside-rust/

GPU offload

  1. The LLVM dev talk about GPU programming went great; I got to talk to a lot of other developers in the area of LLVM offload. I hope to use some of the gained knowledge soon. Concrete steps planned are the integration of libc-gpu for IO from kernels, as well as moving my code over from the OpenMP API to the slightly lower-level liboffload API.
  2. We confirmed that our GPU offload prototype works on more hardware. By now we have the latest AMD APU generation covered, as well as an MI250X and an RTX 4050. My own laptop with a slightly older AMD Ryzen 7 PRO 7840U unfortunately turned out not to be supported by AMD drivers.
  3. The offload intrinsic PR by Marcelo Domínguez is now marked as ready, and I left my second round of review. Hopefully, we can land it soon!
  4. I spent some time trying to build and potentially ship the needed offload changes in nightly; unfortunately, I still fail to build it in CI: https://github.com/rust-lang/rust/pull/148671.

All in all, I think we made great progress over the last month, and it's motivating that we finally have no blockers left for flipping the llvm.enzyme config on our nightly builds.

Getting Rust for Linux into stable Rust: compiler features (rust-lang/rust-project-goals#407)
Progress
Point of contact

Tomas Sedovic

Champions

compiler (Wesley Wiser)

Task owners

(depending on the flag)

2 detailed updates available.

Comment by @tomassedovic posted on 2025-11-19:

Update from the 2025-11-05 meeting.

-Zharden-sls / rust#136597

Wesley Wiser left a comment on the PR, which Andrew is addressing.

-Zno-jump-tables / rust#145974

Merged, expected to ship in Rust 1.93. The Linux kernel added support for the new name for the option (-Cjump-tables=n).

Comment by @tomassedovic posted on 2025-11-28:

Update from the 2025-11-19 meeting:

-Zharden-sls / rust#136597

Andrew addressed the comment and rebased the PR. It's waiting for a review again.

#![register_tool] / rust#66079

Tyler Mandry had an alternative proposal where lints would be defined in an external crate and could be brought in via use or something similar: https://rust-lang.zulipchat.com/#narrow/channel/213817-t-lang/topic/namespaced.20tool.20attrs.

A concern people had was the overhead of having to define a new crate and the potential difficulty with experimenting on new lints.

Tyler suggested adding this as a future possibility to RFC#3808 and FCPing it.

Getting Rust for Linux into stable Rust: language features (rust-lang/rust-project-goals#116)
Progress
Point of contact

Tomas Sedovic

Champions

lang (Josh Triplett), lang-docs (TC)

Task owners

Ding Xiang Fei

2 detailed updates available.

Comment by @tomassedovic posted on 2025-11-19:

Update from the 2025-11-05 meeting.

Deref/Receiver

Ding Xiang Fei posted his reasoning for the trait split in the Zulip thread and suggested adding a second RFC to explain.

TC recommended writing a Reference PR. The style forces one to explain the model clearly, which should then make writing the RFC easier.

The lang experiment PR for arbitrary self types has feature gates for the two options we're exploring.

Arbitrary Self Types and derive(CoercePointee) / tracking issue #44874

theemathas opened an issue derive(CoercePointee) accepts ?Sized + Sized #148399. This isn't a critical issue, just an error that arguably should be a lint.

Boxy opened a fix for a derive(CoercePointee) blocker: Forbid freely casting lifetime bounds of dyn-types.

RFC #3851: Supertrait Auto-impl

Ding Xiang Fei is working on the implementation (the parser and HIR interface for it). Ding's also working on a more complete section dedicated to questions raised by obi1kenobi.

Field projections

Benno Lossin has been posting super detailed updates on the tracking issue.

We've discussed the idea of virtual places (see Zulip thread where they were proposed).

Inlining C code into Rust code

Matt Mauer had an idea to compile C code into LLVM bitcode (instead of an object file) and then use the llvm-link tool to merge the bitcode files together, treating everything in the second bitcode file as a static inlined function. Matt suggested we could integrate this into the rustc passes.

This would make it easy to inline certain functions into Rust code without full LTO.

Relevant Zulip thread.

This sounds like a good candidate for the next Project Goals period.

Comment by @tomassedovic posted on 2025-11-28:

Update from the 2025-11-19 meeting.

rustdoc checking for private and hidden items (rust#149105 & rust#149106)

Miguel proposed that rustdoc check for invalid links to items that are hidden or private, even if no docs are built for them. This can help catch typos or dead links where the docs have become out of date.

Guillaume was much more open to this being a toggle; lolbinarycat opened a PR here: https://github.com/rust-lang/rust/pull/141299

unsafe_op_in_unsafe_fn not respected in imported declarative macros rust#112504

This lint doesn't trigger when importing a declarative macro that's calling unsafe code without having an unsafe block and without a SAFETY comment.

The lint is only triggered when the macro is actually used.

Fix for imports_granularity is not respected for #[cfg]'d items / rustfmt#6666

Ding opened a PR to fix this: https://github.com/rust-lang/rustfmt/issues/6666

rustfmt trailing comma hack

Ding and Manish were talking about writing up a proper fix for the vertical layout that's currently being achieved with the `, //` trailing-comma hack.

TypeId layout

This has been discussed in https://github.com/rust-lang/rust/pull/148265 and https://rust-lang.zulipchat.com/#narrow/channel/213817-t-lang/topic/TypeID.20design/near/560189854.

Apiraino proposed a compiler design meeting here: https://github.com/rust-lang/compiler-team/issues/941. That meeting has not been scheduled yet, though.

Deref / Receiver

Following TC's recommendation, Ding is drafting the Reference PR.

Arbitrary Self Types and derive(CoercePointee)

Ding opened a PR to fix unsoundness in the DispatchFromDyn trait: https://github.com/rust-lang/rust/pull/149068

Theemathas opened a question on whether Receiver should be dyn-compatible: https://github.com/rust-lang/rust/issues/149094

RFC #3848: Pass pointers to const in assembly

Merged!

In-place initialization

Benno noted that Effects and In-place Init are not compatible with each other: https://rust-lang.zulipchat.com/#narrow/channel/528918-t-lang.2Fin-place-init/topic/Fundamental.20Issue.20of.20Effects.20and.20In-place-init/with/558268061

This is going to affect any in-place init proposal.

Benno proposes fixing this with keyword generics. This is a topic that will receive a lot of discussion going forward.

Alice has been nominated and accepted as a language advisor. Fantastic news and congratulations!

Implement Open API Namespace Support (rust-lang/rust-project-goals#256)
Progress
Point of contact

Help Wanted

Champions

cargo (Ed Page), compiler (b-naber), crates-io (Carol Nichols)

Task owners

b-naber, Ed Page

No detailed updates available.
MIR move elimination (rust-lang/rust-project-goals#396)
Progress
Point of contact

Amanieu d'Antras

Champions

lang (Amanieu d'Antras)

Task owners

Amanieu d'Antras

1 detailed update available.

Comment by @Amanieu posted on 2025-11-15:

An RFC draft covering the MIR changes necessary to support this optimization has been written and is currently being reviewed by T-opsem. It has already received one round of review and the feedback has been incorporated in the draft.

Prototype a new set of Cargo "plumbing" commands (rust-lang/rust-project-goals#264)
Progress
Point of contact

Help Wanted

Task owners

Help wanted, Ed Page

No detailed updates available.
Prototype Cargo build analysis (rust-lang/rust-project-goals#398)
Progress
Point of contact

Weihang Lo

Champions

cargo (Weihang Lo)

Task owners

Help wanted, Weihang Lo

2 detailed updates available.

Comment by @weihanglo posted on 2025-11-04:

Instead of using a full-fledged database like SQLite, we switched to a basic JSONL-based logging system to collect build metrics. A simple design doc can be found here: https://hackmd.io/K5-sGEJeR5mLGsJLXqsHrw.

Here are the recent pull requests:

  • https://github.com/rust-lang/cargo/pull/16150
  • https://github.com/rust-lang/cargo/pull/16179

To enable it, set CARGO_BUILD_ANALYSIS_ENABLED=true or add this to the Cargo config file:

[build.analysis]
enabled = true

As of today (nightly-2025-11-03), it emits two log events, build-started and timing-info, to $CARGO_HOME/log/ (~/.cargo/log/ by default). The shape of the timing-info JSON is basically that of the unstable --timing=json output. I anticipate that once this is stabilized we won't need --timing=json.

build.analysis.enabled is a non-blocking unstable feature. Barring bugs, it should be safe to set unconditionally, even on a stable toolchain; when the feature isn't supported, Cargo merely warns about the unknown config.
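Since the logs are plain JSONL, a few lines of Python suffice to consume them. A minimal sketch, assuming one JSON object per line (the exact event schema is whatever Cargo emits and is not documented here):

#!/usr/bin/env python3
# Read the JSONL build-analysis logs described above from
# $CARGO_HOME/log/ (~/.cargo/log/ by default).
import json
import os
import pathlib

log_dir = pathlib.Path(os.environ.get('CARGO_HOME',
                                      pathlib.Path.home() / '.cargo')) / 'log'
if log_dir.is_dir():
    for log_file in sorted(log_dir.iterdir()):
        with open(log_file) as f:
            for line in f:
                # e.g. build-started or timing-info events
                print(json.loads(line))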

Comment by @weihanglo posted on 2025-11-24:

Key developments: Started emitting basic fingerprint information, and kicked off a refactor of the HTML timing report rendering, in preparation for replaying reports through the cargo report timings command.

  • https://github.com/rust-lang/cargo/pull/16203
  • https://github.com/rust-lang/cargo/pull/16282

Blockers: none, except my own availability

Help wanted: Mendy on Zulip brought up log compression (#t-cargo > build analysis log format @ 💬), but I personally don't have time to look at it during this period. I would love to see people create an issue in rust-lang/cargo and help explore the idea.

reflection and comptime (rust-lang/rust-project-goals#406)
Progress
Point of contact

Oliver Scherer

Champions

compiler (Oliver Scherer), lang (Scott McMurray), libs (Josh Triplett)

Task owners

oli-obk

1 detailed update available.

Comment by @nikomatsakis posted on 2025-11-12:

Another related PR:

https://github.com/rust-lang/rust/pull/148820

Rework Cargo Build Dir Layout (rust-lang/rust-project-goals#401)
Progress
Point of contact

Ross Sullivan

Champions

cargo (Weihang Lo)

Task owners

Ross Sullivan

1 detailed update available.

Comment by @ranger-ross posted on 2025-11-21:

Status update November 21, 2025

October was largely spent working out design details of the build cache and locking design.

https://github.com/rust-lang/cargo/pull/16155 was opened with an initial implementation for fine grain locking for Cargo's build-dir however it needs to be reworked after the design clarifications mentioned above.

In November I had a change of employer so I my focus was largely on that. However, we did make some progress towards locking in https://github.com/rust-lang/cargo/pull/16230 which no longer lock the artifact-dir for cargo check. This is expected to land in 1.93.0.

I'm hoping to push fine grain locking forward later this month and in December.

Run more tests for GCC backend in the Rust's CI (rust-lang/rust-project-goals#402)
Progress Completed
Point of contact

Guillaume Gomez

Champions

compiler (Wesley Wiser), infra (Marco Ieni)

Task owners

Guillaume Gomez

1 detailed update available.

Comment by @GuillaumeGomez posted on 2025-11-19:

This project goal has been completed. I updated the first issue to reflect it. Closing the issue then.

Rust Stabilization of MemorySanitizer and ThreadSanitizer Support (rust-lang/rust-project-goals#403)
Progress
Point of contact

Jakob Koschel

Task owners

[Bastian Kersting](https://github.com/1c3t3a), [Jakob Koschel](https://github.com/jakos-sec)

1 detailed update available.

Comment by @jakos-sec posted on 2025-11-21:

We've had a bunch of discussions and I opened an MCP (link, zulip).

I think the final sentiment was to create new targets for the few sanitizers and platforms that are critical. I'm in the process of prototyping something to get new feedback on it.

Rust Vision Document (rust-lang/rust-project-goals#269)
Progress
Point of contact

Niko Matsakis

Task owners

vision team

1 detailed update available.

Comment by @nikomatsakis posted on 2025-11-05:

Update:

Jack Huey has been doing great work building out a system for analyzing interviews. We are currently looking at slicing the data along a few dimensions:

  • What you know (e.g., experience in other languages, how much experience with Rust)
  • What you are trying to do (e.g., application area)
  • Where you are trying to do it (e.g., country)

and asking essentially the same set of questions for each, e.g., what about Rust worked well, what did not work as well, what got you into Rust, etc.

Our plan is to prepare a draft RFC with some major conclusions and next steps, as well as a repository with more detailed analysis (e.g., a deep dive into the Security Critical space).

rustc-perf improvements (rust-lang/rust-project-goals#275)
Progress
Point of contact

James

Champions

compiler (David Wood), infra (Jakub Beránek)

Task owners

James, Jakub Beránek, David Wood

1 detailed update available.

Comment by @Kobzol posted on 2025-11-19:

The new system has been running in production without any major issues for a few weeks now. In a few weeks, I plan to start using the second collector, and then announce the new system to Project members to tell them how they can use its new features.

Stabilize public/private dependencies (rust-lang/rust-project-goals#272)
Progress
Point of contact

Help Wanted

Champions

cargo (Ed Page)

Task owners

Help wanted, Ed Page

No detailed updates available.
Stabilize rustdoc `doc_cfg` feature (rust-lang/rust-project-goals#404)
Progress
Point of contact

Guillaume Gomez

Champions

rustdoc (Guillaume Gomez)

Task owners

Guillaume Gomez

No detailed updates available.
SVE and SME on AArch64 (rust-lang/rust-project-goals#270)
Progress
Point of contact

David Wood

Champions

compiler (David Wood), lang (Niko Matsakis), libs (Amanieu d'Antras)

Task owners

David Wood

2 detailed updates available.

Comment by @nikomatsakis posted on 2025-11-05:

Notes from our meeting today:

Syntax proposal: only keyword

We are exploring the use of a new only keyword to identify "special" bounds that will affect the default bounds applied to the type parameter. Under this proposal, T: SizeOfVal is a regular bound, but T: only SizeOfVal indicates that the T: const Sized default is suppressed.

For the initial proposal, only can only be applied to a known set of traits; one possible extension would be to permit traits with only supertraits to also have only applied to them:

trait MyDeref: only SizeOfVal { }
fn foo<T: only MyDeref>() { }

// equivalent to

trait MyDeref: only SizeOfVal { }
fn foo<T: MyDeref + only SizeOfVal>() { }

We discussed a few other syntactic options:

  • A ^SizeOfVal sigil was appealing due to the semver analogy but rejected on the basis of it being cryptic and hard to google.
  • The idea of applying the keyword to the type parameter only T: SizeOfVal sort of made sense, but it would not compose well if we add additional families of "opt-out" traits like Destruct and Forget, and it's not clear how it applies to supertraits.

Transitioning target

After testing, we confirmed that relaxing the Target bound will result in significant breakage without some kind of transitional measures.

We discussed the options for addressing this. One option would be to leverage the "Implementable trait aliases" RFC, but that would require a new trait (Deref20XX) with a weaker bound and an alias trait Deref = Deref20XX<Target: only SizeOfVal>. That seems very disruptive.

Instead, we are considering an edition-based approach where (in Rust 2024) a T: Deref bound is defaulted to T: Deref<Target: only SizeOfVal> and (in Rust 20XX) T: Deref is defaulted to T: Deref<Target: only Pointee>. The edition transition would therefore convert bounds to one of those two forms to be fully explicit.

One caveat here is that this edition transition, if implemented naively, would result in stronger bounds than are needed much of the time. Therefore, we will explore using a bottom-up analysis to determine, when transitioning, whether the 20XX bound can be used instead of the more conservative 2024 bound.

Supertrait bounds

We explored the implications of weakening supertrait bounds a bit, looking at this example:

trait FooTr<T: ?Sized> {}

struct Foo<T: ?Sized>(std::marker::PhantomData<T>);

fn bar<T: ?Sized>() {}

trait Bar: FooTr<Self> /*: no longer MetaSized */ {
  //       ^^^^^^^^^^^ error!
    // real examples are `Pin` and `TypeOf::of`:
    fn foo(&self, x: Foo<Self>) {
        //        ^^^^^^^^^^^^ error!
        bar::<Self>();
        // ^^^^^^^^^^ error!
          
      
        // real examples are in core::fmt and core::iter:
        trait DoThing {
            fn do_thing() {}
        }
        
        impl<T: ?Sized> DoThing for T {
            default fn do_thing() {}
        }
        
        impl<T: Sized> DoThing for T {
            fn do_thing() {}
        }
        
        self.do_thing();
        // ^^^^^^^^^^^^^ error!
        // specialisation case is not an issue because that feature isn't stable, so we can adjust core, but it is a hazard with expanding trait hierarchies in future if specialisation is ever stabilised
    }
}

The experimental_default_bounds work originally added Self: Trait bounds to default methods but moved away from that because it could cause region errors (source 1 / source 2). We expect the same would apply to us but we are not sure.

We decided not to do much on this; the focus remains on the Deref::Target transition, as it has more uncertainty.

Comment by @davidtwco posted on 2025-11-22:

No progress since [Niko Matsakis's last comment](https://github.com/rust-lang/rust-project-goals/issues/270#issuecomment-3492255970) - intending to experiment with resolving challenges with Deref::Target and land the SVE infrastructure with unfinished parts for experimentation.

Type System Documentation (rust-lang/rust-project-goals#405)
Progress
Point of contact

Boxy

Champions

types (Boxy)

Task owners

Boxy, lcnr

2 detailed updates available.

Comment by @BoxyUwU posted on 2025-11-05:

A bit late on this update, but I sat down with lcnr a little while back and we tried to come up with a list of topics that we felt fell under type system documentation. This is an entirely unordered list, and some topics may already be adequately covered in the dev guide.

Regardless, this effectively serves as a "shiny future" for everything I'd like to have documentation about somewhere (be it the dev guide or in-tree module-level documentation):

  • opaque types
    • non defining vs defining uses
    • member constraints (borrowck overlap)
    • checking item bounds
    • high level normalization/opaque type storage approach (new solver)
    • normalization incompleteness
    • method/function incompleteness
    • how does use<...> work
    • 'erased regions causes problems with outlives item bounds in liveness
    • consistency across defining scopes
    • RPITIT inference? does this have special stuff
    • capturing of bound vars in opaques under binders, Fn bounds are somewhat special in relation to this
    • opaques inheriting late bound function parameters
  • non opaque type, impl Trait
    • RPITIT in traits desugaring
    • impl Trait in bindings
    • APIT desugaring impl details
  • const generics
    • anonymous constants
    • ConstArgHasType
    • TSVs vs RVs and generally upstream doc from lang meeting to dev guide
    • deterministic CTFE requirement
  • HIR typeck
    • expectations (and how used incorrectly :3)
    • method lookup + assorted code cleanups
    • coercions
    • auto-deref/reborrows (in coercions/method selection)
    • closure signature inference
    • fudge_inference_if_ok :>
    • diverging block handling :3
    • fallback :3
  • MIR borrowck
    • MIR typeck
      • why do we want two typecks
      • region dependent goals in new solver (interaction with lack-of region uniquification)
    • overlaps with opaque types
    • compute region graph
    • closure requirements
    • borrowck proper
  • compare predicate entailment :>
    • param env jank
    • implied bounds handling
  • trait objects: recent FCPs :3
    • dyn compatibility soundness interactions (see coerce pointee/arbitrary self types stuff)
    • dyn compatibility for impl reasons (monomorphization)
    • projection bounds handling
    • args not required for wf
  • ty::Infer in ty overview
  • generalization
  • coroutines
    • deferred coroutine obligations
    • witness types?
    • why -Zhigher-ranked-assumptions exists
  • binders and universes (exists A, forall B, A == B)
    • build more of an intuition than current docs :thinking_face:
  • talk about hr implied bounds there/be more explicit/clear in https://rustc-dev-guide.rust-lang.org/traits/implied-bounds.html?highlight=implied#proving-implicit-implied-bounds
  • incompleteness
    • what is it
    • what kinds are OK (not entirely sure yet. small explanation and add a note)
  • trait solving
    • cycles
    • general overview of how trait solving works as a concept (probably with example and handwritten proof trees)
      • important: first go "prove stuff by recursively proving nested requirements", then later introduce candidates
      • clauses/predicates
    • running pending goals in a loop
    • what kinds of incompleteness (overlap with opaques)
    • builtin impls and how to add them
  • hir to ty lowering :>
    • itemctxt vs fnctxt behaviours
    • normalization in lowering
    • lowering should be lossy
    • idempotency(?)
    • cycles from param env construction
    • const generics jank about Self and no generic parameters allowed
  • well formedness checking + wf disambiguation page
  • normalization & aliases
    • be more clear about normalizing ambig aliases to infer vars :thinking_face:
    • normalize when equating infer vars with aliases (overlap with generalization?)
    • item bounds checking
    • interactions with implied bounds (overlap with implied bounds and hir ty lowering)
  • variance

Since making this list I've started working on writing documentation about coercions/adjustments. So far this has mostly resulted in spending a lot of time reading the relevant code in rustc. I've discovered a few bugs and inconsistencies in behaviour and made some nice code cleanups, which should already be valuable for people learning how coercions are implemented. This can be seen in #147565.

I intend to start actually writing stuff in the dev guide for coercions/adjustments now as that PR is almost done.

I also intend to use a zulip thread (#t-compiler/rustc-dev-guide > Type System Docs Rewrite) for more "lightweight" and informal updates on this project goal, as well as for miscellaneous discussion about related work.

Comment by @BoxyUwU posted on 2025-11-29:

I've made a tracking issue on the dev guide repo for this project goal: rust-lang/rustc-dev-guide#2663. I've also written documentation for coercions: rust-lang/rustc-dev-guide#2662. There have been a few extra additions to the list in the previous update.

Progress
Point of contact

Jack Wrenn

Champions

compiler (Jack Wrenn), lang (Scott McMurray)

Task owners

Jacob Pratt, Jack Wrenn, Luca Versari

No detailed updates available.

Wladimir PalantUnpacking VStarcam firmware for fun and profit

One important player in the PPPP protocol business is VStarcam. At the very least they’ve already accumulated an impressive portfolio of security issues. Like exposing the system configuration, including the access password, unprotected in the Web UI (discovered by multiple people independently, from the look of it). Or the open telnet port accepting hardcoded credentials (definitely discovered by lots of people independently). In fact, these cameras have been seen used as part of a botnet, likely thanks to some documented vulnerabilities in their user interface.

Is that a thing of the past? Are there updates fixing these issues? Which devices can be updated? These questions are surprisingly hard to answer. I found zero information on VStarcam firmware versions, available updates or security fixes. In fact, it doesn’t look like they ever even acknowledged learning about the existence of these vulnerabilities.

No way around downloading these firmware updates and having a look for myself. With surprising results. First of all: there are lots of firmware updates. It seems that VStarcam accumulated a huge number of firmware branches. And even though not all of them have an active or downloadable update, the number of currently available updates goes into the hundreds.

And the other aspect: the variety of update formats is staggering, and often enough standard tools like binwalk aren’t too useful. It took some time figuring out how to unpack some of the more obscure variants, so I’m documenting it all here.

Warning: Lots of quick-and-dirty Python code ahead. Minimal error checking, use at your own risk!

ZIP-packed incremental updates

These incremental updates don’t contain an image of the entire system, only the files that need updating. They always contain the main application however, which is what matters.

Recognizing this format is easy: the files start with the 32 bytes www.object-camera.com.by.hongzx. or www.veepai.com/design.rock-peng. (the old and the new variant respectively). The files end with the same string in reverse order. Everything in between is a sequence of ZIP files, with each file packed in its own ZIP file.

Each ZIP file is preceded by a 140 byte header: 64 byte directory name, 64 byte file name, 4 byte ZIP file size, 4 byte timestamp of some kind and 4 zero bytes. While binwalk can handle this format, having each file extracted into a separate directory structure isn’t optimal. A simple Python script can do better:

#!/usr/bin/env python3
import datetime
import io
import struct
import os
import sys
import zipfile


def unpack_zip_stream(input: io.BytesIO, targetdir: str) -> None:
    targetdir = os.path.normpath(targetdir)
    while True:
        header = input.read(0x8c)
        if len(header) < 0x8c:
            break

        _, _, size, _, _ = struct.unpack('<64s64sLLL', header)
        data = input.read(size)

        with zipfile.ZipFile(io.BytesIO(data)) as archive:
            for member in archive.infolist():
                path = os.path.normpath(
                    os.path.join(targetdir, member.filename)
                )
                if os.path.commonprefix((path, targetdir)) != targetdir:
                    raise Exception('Invalid target path', path)

                try:
                    os.makedirs(os.path.dirname(path))
                except FileExistsError:
                    pass

                with archive.open(member) as member_input:
                    data = member_input.read()
                with open(path, 'wb') as output:
                    output.write(data)

                time = datetime.datetime(*member.date_time).timestamp()
                os.utime(path, (time, time))


if __name__ == '__main__':
    if len(sys.argv) != 3:
        print(f'Usage: {sys.argv[0]} in-file target-dir', file=sys.stderr)
        sys.exit(1)

    if os.path.exists(sys.argv[2]):
        raise Exception('Target directory exists')

    with open(sys.argv[1], 'rb') as input:
        header = input.read(32)
        if (header != b'www.object-camera.com.by.hongzx.' and
                header != b'www.veepai.com/design.rock-peng.'):
            raise Exception('Wrong file format')
        unpack_zip_stream(input, sys.argv[2])

VStarcam pack system

This format is pretty simple. There is an identical section starting with VSTARCAM_PACK_SYSTEM_HEAD and ending with VSTARCAM_PACK_SYSTEM_TAIL at the start and at the end of the file. This section seems to contain a payload size and its MD5 hash.

There are two types of payload here. One is a raw SquashFS image starting with hsqs. These seem to be updates to the base system: they contain an entire Linux root filesystem and the Web UI root but not the actual application. The matching application lives on a different partition and is likely delivered via incremental updates.

The other variant seems to be used for hardware running LiteOS rather than Linux. The payload here starts with a 16 byte header: compressed size, uncompressed size and an 8 byte identification of the compression algorithm. The latter is usually gziphead, meaning standard gzip compression. After uncompressing you get a single executable binary containing the entire operating system, drivers, and the actual application.
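Since the header layout is fully described above, unpacking the common gziphead case takes only a few lines. This is a minimal sketch of mine, not code from the firmware tools; it assumes the surrounding VSTARCAM_PACK_SYSTEM section has already been stripped and that the header integers are little-endian:

#!/usr/bin/env python3
import struct
import sys
import zlib

with open(sys.argv[1], 'rb') as input:
    data = input.read()

# 16 byte header: compressed size, uncompressed size, 8 byte algorithm id
comp_size, uncomp_size, algorithm = struct.unpack('<LL8s', data[:16])
if algorithm != b'gziphead':
    raise Exception('Unexpected compression algorithm', algorithm)

# wbits=31 tells zlib to expect a gzip wrapper around the deflate stream
payload = zlib.decompress(data[16:16 + comp_size], wbits=31)
if len(payload) != uncomp_size:
    raise Exception('Uncompressed size mismatch')

with open(sys.argv[2], 'wb') as output:
    output.write(payload)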

So far binwalk can handle all these files just fine. I found exactly one exception, firmware version 48.60.30.22. It seems to be another LiteOS-based update but the compression algorithm field is all zeroes. The actual compressed stream has some distinct features that make it look like none of the common compression algorithms.

Screenshot of a hexdump showing the first 160 and the last 128 bytes of a large file. The file starts with the bytes 30 c0 fb 54 and looks random except for two sequences of 14 identical bytes: ef at offset 0x24 and fb at offset 0x43. The file ending also looks random except for the closing sequence: ff ff 0f 00 00.

Well, I had to move on here, so that’s the one update file I haven’t managed to unpack.

VeePai updates

This is a format that seems to be used by newer VStarcam hardware. At offset 8 these files contain a firmware version like www.veepai.com-10.201.120.54. Offsets of the payload vary but it is a SquashFS image, so binwalk can be used to find and unpack it.

Normally these are updates for the partition where the VStarcam application resides. In a few cases, however, they update the Linux base system, with no application-specific files from what I could tell.

Ingenic updates

This format seems to be specific to the Ingenic hardware platform; I’ve seen vendors other than VStarcam use it as well. One noticeable feature here is the presence of a tag partition containing various data sections, e.g. the CMDL section encoding Linux kernel parameters.

In fact, looking for that tag partition within the update might be helpful to recognize the format. While the update files usually start with the 11 22 33 44 magic bytes, they sometimes start with a different byte combination. The firmware version, however, is always at offset 8 in the file.

The total size of the file header is 40 bytes. It is followed by a sequence of partitions, each preceded by a 16 byte header where bytes 1 to 4 encode the partition index and bytes 9 to 12 the partition size.

Binwalk can recognize and extract some partitions but not all of them. If you prefer having all partitions extracted you can use a simple Python script:

#!/usr/bin/env python3
import io
import struct
import os
import sys


def unpack_ingenic_update(input: io.BytesIO, targetdir: str) -> None:
    os.makedirs(targetdir)

    input.read(40)
    while True:
        header = input.read(16)
        if len(header) < 16:
            break

        index, _, size, _ = struct.unpack('<LLLL', header)
        data = input.read(size)
        if len(data) < size:
            raise Exception(f'Unexpected end of data')

        path = os.path.join(targetdir, f'mtdblock{index}')
        with open(path, 'wb') as output:
            output.write(data)


if __name__ == '__main__':
    if len(sys.argv) != 3:
        print(f'Usage: {sys.argv[0]} in-file target-dir', file=sys.stderr)
        sys.exit(1)

    with open(sys.argv[1], 'rb') as input:
        unpack_ingenic_update(input, sys.argv[2])

You will find some partitions rather tricky to unpack however.

LZO-compressed partitions

Some partitions contain a file name at offset 34, typically rootfs_camera.cpio. These are LZO-compressed but lack the usual magic bytes. Instead, the first four bytes contain the size of the compressed data in this partition. Once you replace these four bytes with 89 4c 5a 4f (removing trailing junk is optional), the partition can be uncompressed with the regular lzop tool and the result fed into cpio to get the individual files.
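That patch is easily automated. Here is a minimal sketch of mine (not from the original tooling) that keeps any trailing junk, since removing it is optional:

#!/usr/bin/env python3
# Replace the leading 4 byte size field with the LZO magic bytes
# 89 4c 5a 4f so that the regular lzop tool accepts the partition.
import sys

if len(sys.argv) != 3:
    print(f'Usage: {sys.argv[0]} in-partition out-file.lzo', file=sys.stderr)
    sys.exit(1)

with open(sys.argv[1], 'rb') as input:
    data = input.read()

with open(sys.argv[2], 'wb') as output:
    output.write(b'\x89LZO' + data[4:])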

Ingenic’s jzlzma compression

Other Ingenic root partitions are more tricky. These also start with the data size, but it is followed by the bytes 56 19 05 27 (that’s a uImage signature in reversed byte order). After that comes a compressed stream that sort of looks like LZMA but isn’t LZMA. What’s more: while binwalk will report that the Linux kernel is compressed via LZ4, it’s actually the same strange compression mechanism. The bootloader of these systems pre-dates the introduction of LZ4, so the compression algorithm identifier it used for this mechanism was later assigned to LZ4 by the upstream version of the bootloader.

What kind of compression is this? I’ve spent some time analyzing the bootloader but it turned out to be a red herring: apparently, the decompression is performed by hardware here, and the bootloader merely pushes the data into designated memory areas. Ugh!

At least the bootloader told me how it is called: jzlzma, which is apparently Ingenic’s proprietary LZMA variant. An LZMA header starts with a byte encoding some compression properties (typically 5D), a 4 byte dictionary size and an 8 byte uncompressed size. Ingenic’s header is missing compression properties, and the uncompressed size is merely 4 bytes. But even accounting for these differences the stream cannot be decompressed with a regular LZMA decoder.

Luckily, with the algorithm name I found tools on GitHub that are meant to create firmware images for the Ingenic platform. These included an lzma binary which turned out to be an actual LZMA tool from 2005, hacked up to produce a second compressed stream in Ingenic’s proprietary format.

As I found, Ingenic’s format has essentially two differences to regular LZMA:

  1. Bit order: Ingenic encodes bits within bytes in reverse order. Also, some of the numbers (not all of them) are written to the bit stream in reversed bit order.
  2. Range coding: Ingenic doesn’t do any range coding, instead encoding all numbers verbatim.

That second difference essentially turns LZMA into LZ77. Clearly, the issue here was the complexity of implementing probabilistic range coding in hardware. Of course, that change makes the resulting algorithm produce considerably worse compression ratios than LZMA and even worse than much simpler LZ77-derived algorithms like deflate. And there is plenty of hardware to do deflate decompression. But at least they managed to obfuscate the data…

My original thought was “fixing” their stream and turning it into proper LZMA. But range coding is not only complex but also context-dependent; it cannot be done without decompressing. So I ended up just writing the decompression logic in Python, which luckily was much simpler than doing the same thing for LZMA proper.

Note: The following script is minimalistic and wasn’t built for performance. Also, it expects a file that starts with a dictionary size (typically the bytes 00 00 01 00), so if you have some header preceding it you need to remove it first. It will also happily “uncompress” any trailing junk you might have there.

#!/usr/bin/env python3
import sys

kStartPosModelIndex, kEndPosModelIndex, kNumAlignBits = 4, 14, 4


def reverse_bits(n, bits):
    reversed = 0
    for i in range(bits):
        reversed <<= 1
        if n & (1 << i):
            reversed |= 1
    return reversed


def bit_stream(data):
    for byte in data:
        for bit in range(8):
            yield 1 if byte & (1 << bit) else 0


def read_num(stream, bits):
    num = 0
    for _ in range(bits):
        num = (num << 1) | next(stream)
    return num


def decode_length(stream):
    if next(stream) == 0:
        return read_num(stream, 3) + 2
    elif next(stream) == 0:
        return read_num(stream, 3) + 10
    else:
        return read_num(stream, 8) + 18


def decode_dist(stream):
    posSlot = read_num(stream, 6)
    if posSlot < kStartPosModelIndex:
        pos = posSlot
    else:
        numDirectBits = (posSlot >> 1) - 1
        pos = (2 | (posSlot & 1)) << numDirectBits
        if posSlot < kEndPosModelIndex:
            pos += reverse_bits(read_num(stream, numDirectBits), numDirectBits)
        else:
            pos += read_num(stream, numDirectBits -
                            kNumAlignBits) << kNumAlignBits
            pos += reverse_bits(read_num(stream, kNumAlignBits), kNumAlignBits)
    return pos


def jzlzma_decompress(data):
    stream = bit_stream(data)
    reps = [0, 0, 0, 0]
    decompressed = []
    try:
        while True:
            if next(stream) == 0:           # LIT
                byte = read_num(stream, 8)
                decompressed.append(byte)
            else:
                size = 0
                if next(stream) == 0:       # MATCH
                    size = decode_length(stream)
                    reps.insert(0, decode_dist(stream))
                    reps.pop()
                elif next(stream) == 0:
                    if next(stream) == 0:   # SHORTREP
                        size = 1
                    else:                   # LONGREP[0]
                        pass
                elif next(stream) == 0:     # LONGREP[1]
                    reps.insert(0, reps.pop(1))
                elif next(stream) == 0:     # LONGREP[2]
                    reps.insert(0, reps.pop(2))
                else:                       # LONGREP[3]
                    reps.insert(0, reps.pop(3))

                if size == 0:
                    size = decode_length(stream)

                curLen = len(decompressed)
                start = curLen - reps[0] - 1
                while size > 0:
                    end = min(start + size, curLen)
                    decompressed.extend(decompressed[start:end])
                    size -= end - start
    except StopIteration:
        return bytes(decompressed)


if __name__ == '__main__':
    if len(sys.argv) != 3:
        print(f'Usage: {sys.argv[0]} in-file.jzlzma out-file', file=sys.stderr)
        sys.exit(1)

    with open(sys.argv[1], 'rb') as input:
        data = input.read()
    data = jzlzma_decompress(data[8:])
    with open(sys.argv[2], 'wb') as output:
        output.write(data)

The uncompressed root partition can be fed into the regular cpio tool to get the individual files.

Exotic Ingenic update

There was one update using a completely different format despite also being meant for the Ingenic hardware. This one started with the bytes a5 ef fe 5a and had a SquashFS image at offset 0x3000. The unpacked contents (binwalk will do) don’t look like any of the other updates either: this definitely isn’t a camera, and it doesn’t have a PPPP implementation. Given the HDMI code I can only guess that this is a Network Video Recorder (NVR).

But what about these security issues?

As to those security issues I am glad to report that VStarcam solved the telnet issue:

export PATH=/system/system/bin:$PATH
#telnetd
export LD_LIBRARY_PATH=/system/system/lib:/mnt/lib:$LD_LIBRARY_PATH
mount -t tmpfs none /tmp -o size=3m

/system/system/bin/brushFlash
/system/system/bin/updata
/system/system/bin/wifidaemon &
/system/system/bin/upgrade &

Yes, their startup script really has the telnetd call commented out. At least that’s usually the case. There are updates from 2018 that are no longer opening the telnet port. There are other updates from 2025 that still do. Don’t ask me why. From what I can tell the hardcoded administrator credentials are still universally present, but these are only problematic with the latter group.

It’s a similar story with the system.ini file that was accessible without authentication. Some firmware versions had this file moved to a different directory, others still have it in the web root. There is no real system behind it, and I even doubt that this was a security-induced change rather than an adjustment to a different hardware platform.

Tarek ZiadéAll I Want for Christmas is a Better Alt Text – Part 1

Context: Improving Alt Text for Firefox

Earlier this year, I built the backend for the local alt text generation feature in Firefox. Nearly half of the images on the web still lack alternative text, creating a major accessibility barrier for screen reader users. The goal of this work is straightforward but ambitious: generate high-quality alt text entirely on device, preserving user privacy while improving access to visual content.

The first implementation focused on PDF.js, primarily as a controlled environment to validate the approach. Now that the runtime performance is good enough, the next step is to generalize this capability across the entire browser so that all web images can benefit from meaningful descriptions. Before that generalization, however, improving accuracy is essential.

From a modeling perspective, the system pairs a Vision Transformer (ViT) with DistilGPT-2, a 182-million-parameter language model that fits under 200 MB once quantized. Improving this system involves multiple, often competing dimensions: bias reduction, description accuracy, and inference speed. This post focuses on the data side of the problem, specifically dataset quality and bias. Part 2 will look at model-level improvements for accuracy and performance.

First Round: Removing Bias with GPT-4o

The original image captions contained several recurring issues:

  • Gender bias: skateboarders described as “men”, nurses as “women”
  • Age stereotyping: unnecessary or reductive age descriptors
  • Offensive or outdated language: culturally insensitive terms that no longer belong in a modern dataset

To address this, I used GPT-4o to systematically transform captions from Flickr30k and COCO, removing demographic descriptors that were not visually required. The resulting datasets are available on Hugging Face (Mozilla/flickr30k-transformed-captions-gpt4o) and were used to train the current Firefox local alt text model.

For more background on this initial effort, see the Mozilla Hacks post and the Firefox blog announcement. This is the model that is currently shipping in Firefox.

Second Round: Measuring What Actually Improved

Qualitative panel testing showed that the transformed captions were generally better received by humans, but that only answered part of the question. What exactly improved, by how much, and what problems remained hidden in the data?

This post documents the second round of work, which focused on building systematic measurement tools to:

  1. Quantify how much bias was actually removed
  2. Verify that transformed captions still describe the images accurately
  3. Identify class imbalance and other structural issues
  4. Lay the groundwork for targeted fixes, including synthetic data generation

When training vision-language models, dataset quality is often treated as a secondary concern compared to architecture or training tricks. In practice, the data is the foundation. If the dataset is biased, noisy, or unbalanced, no amount of fine-tuning will fully compensate.

The Problem Space

After the GPT-4o transformation, several open questions remained:

  • Did bias removal actually work in a measurable way?
  • Was semantic meaning preserved during transformation?
  • Did image–text alignment degrade or improve?
  • Are some visual concepts severely underrepresented?
  • Can these checks be repeated reliably for future dataset versions?

Answering these questions requires more than a single score or benchmark.

A Multi-Metric Quality Analysis

I built a dataset quality analysis tool that evaluates four complementary dimensions. The emphasis is on improving the training data itself, rather than compensating for data issues at model time.

1. Image–Text Alignment (CLIP Score)

CLIP provides a convenient proxy for how well a caption matches its corresponding image. By embedding both modalities and computing cosine similarity, I obtain a rough but useful alignment score.

A key improvement in this round was upgrading from CLIP ViT-B/32 to ViT-L/14 @ 336 px. The larger model produces lower absolute scores, but it is significantly more discriminative, making it easier to separate strong alignments from weak ones.

Interpretation guidelines:

  • Excellent: ≥ 0.35
  • Good: 0.30–0.35
  • Fair: 0.25–0.30
  • Poor: < 0.25

On the transformed dataset, I observe scores of 0.311 with ViT-B/32 (Good) and 0.284 with ViT-L/14 @ 336 px (Fair but more informative).
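To make the computation concrete, here is a minimal sketch using the Hugging Face transformers API with the public ViT-L/14 @ 336 px checkpoint. This is my illustration rather than the project’s actual tooling:

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

MODEL = 'openai/clip-vit-large-patch14-336'
model = CLIPModel.from_pretrained(MODEL)
processor = CLIPProcessor.from_pretrained(MODEL)

def clip_score(image: Image.Image, caption: str) -> float:
    """Cosine similarity between CLIP image and text embeddings."""
    inputs = processor(text=[caption], images=image,
                       return_tensors='pt', padding=True)
    with torch.no_grad():
        img = model.get_image_features(pixel_values=inputs['pixel_values'])
        txt = model.get_text_features(input_ids=inputs['input_ids'],
                                      attention_mask=inputs['attention_mask'])
    # L2-normalize both embeddings, then take the dot product
    img = img / img.norm(dim=-1, keepdim=True)
    txt = txt / txt.norm(dim=-1, keepdim=True)
    return float((img * txt).sum())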

2. Caption Fidelity (BERTScore)

Removing bias should not come at the cost of semantic drift. To verify this, I used BERTScore with a RoBERTa-large backbone to compare original and transformed captions.

Scores above 0.90 generally indicate that the core meaning is preserved. The transformed dataset achieves 0.904, which falls comfortably in the “excellent” range.
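The check itself is essentially a one-liner with the bert-score package. A hedged sketch, with made-up example captions:

from bert_score import score

originals = ['A man rides a skateboard down a rail.']
transformed = ['A person rides a skateboard down a rail.']

# F1 close to 1.0 means the rewrite preserved the original meaning
P, R, F1 = score(transformed, originals, model_type='roberta-large')
print(float(F1.mean()))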

3. Bias Detection Before and After

Bias reduction is only meaningful if it can be measured. I tracked mentions of protected attributes across seven categories, including gender, race or ethnicity, nationality, age, religion, sexual orientation, and disability.

By comparing original and transformed captions on the same samples, I can directly quantify the effect of the transformation. On a 1 000-sample evaluation set, gender mentions dropped from 70 percent to zero, race and ethnicity mentions dropped by 97 percent, and nationality mentions were completely eliminated. Age-related terms remain more common, largely because they are often visually relevant, for example when describing children.

4. Object Distribution and Imbalance

Finally, I analyzed object frequency to identify long-tail problems. Using metrics such as the Gini coefficient and Shannon entropy, the tool highlights severe imbalance: thousands of objects appear only a handful of times.

This analysis automatically produces lists of rare objects and sampling weights that can be used for rebalancing during training.
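Both statistics are straightforward to compute from raw object counts. A small self-contained sketch with made-up counts (my illustration, not the actual tool):

import numpy as np

counts = np.array([5000, 1200, 300, 40, 7, 3, 1], dtype=float)

# Gini coefficient over sorted counts: 0 = balanced, 1 = maximally skewed
x = np.sort(counts)
n = len(x)
gini = (n + 1 - 2 * (np.cumsum(x) / x.sum()).sum()) / n

# Shannon entropy of the empirical label distribution, in bits
p = counts / counts.sum()
entropy = -(p * np.log2(p)).sum()

# Inverse-frequency sampling weights for rebalancing during training
weights = (1 / counts) / (1 / counts).sum()
print(gini, entropy, weights)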

Using CLIP as a Training Signal

Beyond evaluation, CLIP can also be used to guide training directly. I experimented with a combined loss that adds a CLIP-based alignment term to the standard cross-entropy loss for caption generation.

The intuition is simple: encourage the model to generate captions that are not only fluent, but also visually grounded. Early results suggest modest but consistent gains in CLIP score, at the cost of slower training and higher memory usage.
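A hedged sketch of what such a combined objective can look like; the weight and the wiring are assumptions, and making caption generation differentiable enough to backpropagate through the CLIP term (e.g. via teacher forcing over embedded tokens) is the hard part, out of scope here:

import torch
import torch.nn.functional as F

def combined_loss(ce_loss: torch.Tensor,
                  image_emb: torch.Tensor,
                  text_emb: torch.Tensor,
                  clip_weight: float = 0.1) -> torch.Tensor:
    # Standard caption cross-entropy plus a CLIP alignment penalty
    img = F.normalize(image_emb, dim=-1)
    txt = F.normalize(text_emb, dim=-1)
    alignment = (img * txt).sum(dim=-1).mean()  # mean cosine similarity
    return ce_loss + clip_weight * (1.0 - alignment)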

Running the Quality Analysis

The quality analysis tool integrates directly into the project’s Makefile:

# Quick test (100 samples)
make quality-report-quick

# Full analysis on test split
make quality-report SPLIT=test

# Custom analysis
make quality-report SPLIT=train MAX_SAMPLES=1000 OUTPUT_DIR=./my_reports

Example Dataset Quality Report

Below is an excerpt from the generated quality report for the full Flickr30k transformed dataset. It illustrates how the metrics come together in practice.

================================================================================
                             DATASET QUALITY REPORT
================================================================================

Dataset: Mozilla/flickr30k-transformed-captions-gpt4o
Samples: 31 014

IMAGE–TEXT ALIGNMENT (CLIP)
Score: 0.274 ± 0.036   Assessment: FAIR

CAPTION FIDELITY (BERTScore)
Score: 0.899 ± 0.023   Assessment: GOOD

BIAS DETECTION (Original → Transformed)
Gender:         67% → 0%
Race/Ethnicity: 27% → 1%
Nationality:     1% → 0%
Age:            19% → 17%

OBJECT DISTRIBUTION
Gini coefficient: 0.889
Rare classes (<50 samples): 6 210
================================================================================

The report confirms that the GPT-4o transformation is highly effective at removing demographic bias while preserving meaning. At the same time, it surfaces two remaining issues: only fair image–text alignment and severe class imbalance.

Output Files

The analysis produces the following artifacts:

Directory: quality_reports/
  • summary.json                 - Aggregate metrics in JSON format
  • quality_report.txt           - Human-readable summary report
  • per_example_scores.csv       - Per-sample CLIP, BERT, and bias scores
  • ranked_by_combined.csv       - Samples ranked by combined quality score
  • object_counts.csv            - Object frequency distribution
  • objects_below_50.csv         - Rare / underrepresented objects (≤50 samples)
  • reweighting_probs.csv        - Sampling probabilities for balanced training
  • lorenz_curve.png             - Object distribution inequality visualization
  • top_failures/                - Top failure cases with images and captions

These artifacts make it easy to audit dataset quality, compare runs, and target specific weaknesses.

Key Takeaways

  • Dataset quality cannot be captured by a single metric
  • Bias removal can be measured and verified quantitatively
  • Larger CLIP models are more useful for discrimination, even if absolute scores are lower
  • Alignment-aware training objectives show promise
  • Class imbalance remains a major, and solvable, issue

What Comes Next

None of these improvements are shipping yet. They are preparatory steps that make future work safer and more predictable. With solid metrics in place, the next phase is to train improved models, validate gains rigorously, and continue reducing long-tail failures.

The long-term goal remains unchanged: provide high-quality, privacy-preserving alt text for the large fraction of web images that still lack it, and do so in a way that is fair, transparent, and measurable.

References and Resources

Background

Datasets

Metrics

Code

The Servo BlogNovember in Servo: monthly releases, context menus, parallel CSS parsing, and more!

Landing in Servo 0.0.3 and our November nightly builds, we now have context menus for links, images, and other web content (@atbrakhi, @mrobinson, #40434, #40501), vsync on Android (@mrobinson, #40306), light mode for the new tab page (@arihant2math, #40272), plus several web platform features:

Servo 0.0.3 showing new support for <use> in SVG, <details name>, and context menus

Font variations are now applied in ‘font-weight’ and ‘font-stretch’ (@simonwuelker, #40867), fixing a rendering issue in the Web Engines Hackfest website.

@kkoyung has landed some huge improvements in the SubtleCrypto API, including some of the more modern algorithms in this WICG draft, and a fix for constant-time base64 (@kkoyung, #40334). We now have full support for SHA3-256, SHA3-384, SHA3-512 (@kkoyung, #40765), cSHAKE128, cSHAKE256 (@kkoyung, #40832), Argon2d, Argon2i, Argon2id, ChaCha20-Poly1305, ECDH, ECDSA, and X25519:

Algorithm          deriveBits  exportKey  generateKey  importKey  sign    verify
Argon2d            #40936      n/a        n/a          #40932     n/a     n/a
Argon2i            #40936      n/a        n/a          #40932     n/a     n/a
Argon2id           #40936      n/a        n/a          #40932     n/a     n/a
ChaCha20-Poly1305  n/a         #40948     n/a          #40948     n/a     n/a
ECDH               #40333      #40298     #40305       #40253     n/a     n/a
ECDSA              n/a         #40536     #40553       #40523     #40591  #40557
X25519             #40497      #40421     #40480       #40398     n/a     n/a

<details> now fires ‘toggle’ events (@lukewarlow, #40271), and <details name> is now exclusive, like radio buttons (@simonwuelker, #40314). InputEvent, which represents ‘input’ and ‘beforeinput’ events, now has composed, data, isComposing, and inputType properties (@excitablesnowball, #39989).

Embedding API

Each webview can now have its own rendering context (@mrobinson, @mukilan, #40794, #40738, #40721, #40594, #40923). This effectively enables full support for multiple windows, and we’ve started incorporating that into servoshell (@mrobinson, @mukilan, #40883).

Our previously unused context menu API has been replaced with a new, more effective API that includes actions for links, images, and other web content (@mrobinson, @atbrakhi, #40402, #40501, #40607). For more details, see the docs for ContextMenu, EmbedderControl::ContextMenu, and WebViewDelegate::show_embedder_control().

WebView now has can_go_back() and can_go_forward() methods, and servoshell now uses those to disable the back and forward buttons (@mrobinson, #40598).

Having introduced our new RefreshDriver API in October, we’ve now removed Servo::animating() (@mrobinson, #40799) and ServoDelegate::notify_animating_changed() (@mrobinson, #40886), and similarly cleaned up the obsolete and inefficient “animating” state in servoshell (@mrobinson, #40715).

We’ve moved virtually all of the useful items in the Servo API to the root of the servo library crate (@mrobinson, #40951). This is a breaking change, but we expect that it will greatly simplify embedding Servo, and it means you can even write use servo::*; in a pinch.

When running Servo without a custom ClipboardDelegate, we normally use the system clipboard by default. But if there’s no system clipboard, we now have a built-in fallback clipboard (@mrobinson, #40408), rather than having no clipboard at all. Note that the fallback clipboard is very limited, as it can only store text and does not work across processes.

Performance and stability

Servo now parses CSS in parallel with script and layout (@mrobinson, @vimpunk, #40639, #40556), and can now measure Largest Contentful Paint in PerformanceObserver as well as in our internal profiling tools (@shubhamg13, @boluochoufeng, #39714, #39384).

Just-in-time compilation (JIT) is now optional (@jschwe, #37972), which can be useful in situations where generating native code is forbidden by policy or unwanted for other reasons.

We’ve improved the performance of incremental layout (@Loirooriol, @mrobinson, #40795, #40797), touch input (@kongbai1996, #40637), animated GIF rendering (@mrobinson, #40158), the prefs subsystem (@webbeef, #40775), and parseFromString() on DOMParser (@webbeef, #40742). We also use fewer IPC resources when internal profiling features are disabled (@lumiscosity, #40823).

We’ve fixed a bug causing nytimes.com to hang (@jdm, #40811), as well as fixing crashes in Speedometer 3.0 and 3.1 (@Narfinger, #40459), grid layout (@nicoburns, #40821), the fonts subsystem (@simonwuelker, #40913), XPath (@simonwuelker, #40411), ReadableStream (@Taym95, #40911), AudioContext (@Taym95, #40729), and when exiting Servo (@mrobinson, #40933).

Donations

Thanks again for your generous support! We are now receiving 6433 USD/month (+11.8% over October) in recurring donations. This helps us cover the cost of our speedy CI and benchmarking servers, one of our latest Outreachy interns, and funding maintainer work that helps more people contribute to Servo.

Servo is also on thanks.dev, and already 28 GitHub users (same as October) that depend on Servo are sponsoring us there. If you use Servo libraries like url, html5ever, selectors, or cssparser, signing up for thanks.dev could be a good way for you (or your employer) to give back to the community.

We now have sponsorship tiers that allow you or your organisation to donate to the Servo project with public acknowledgement of your support. A big thanks from Servo to our newest Bronze Sponsors: Jenny & Phil Porada, Josh Aas, LambdaTest, and Sandwich! If you’re interested in this kind of sponsorship, please contact us at join@servo.org.


Use of donations is decided transparently via the Technical Steering Committee’s public funding request process, and active proposals are tracked in servo/project#187. For more details, head to our Sponsorship page.

Mozilla ThunderbirdVIDEO: Exchange Email Support

Welcome to the last Community Office Hours of 2025! In this edition, Heather and Monica welcome Sr. Software Engineer Brendan Abolivier and Software Engineer Eleanor Dicharry from the Desktop Team. We’re discussing the recent Exchange Web Services Support for email that just landed in Thunderbird Monthly Release 145. Learn how the team landed this feature and discover future plans for Calendar and Contact support, as well as Graph API, in the blog, video, and podcast below.

Community Office Hours will be back in 2026. Thank you so much for joining us for these sneak peeks into how we make, improve, and expand Thunderbird! As always, if you have any ideas for future office hours topics, let us know in the comments!

What is Exchange and why did this take so long?

Exchange is the server-side product that hosts Microsoft’s e-mail, address book, and calendar services. Exchange powers both Microsoft services in the cloud on Microsoft 365 as well as on premises servers run by organizations. 

This is the first protocol we’ve added in over 20 years. We have an older code base that was in survival mode for a long time, and knowing the code well enough to improve on it is a challenge. So we had to understand how everything fit together first. The good news is this entire process will make adding future protocols, like JMAP and Graph, which will ultimately replace Exchange, go much faster.

Signing into Exchange in Thunderbird 145

Right now, only mail is supported. When users add an account, Thunderbird will try to detect if EWS (Exchange Web Services, the API we currently use to interact with Exchange servers) is available. Users can use this in the new Account Hub and in manual account configuration for Microsoft Exchange-hosted accounts. Like IMAP and other server types, users can set which folders are used for trash, spam, sent mail, and other special folders. However, the Exchange API doesn’t store this customization on the server, so these preferences will only apply to actions like “move to trash” and “mark as junk” in Thunderbird.

These limits, thankfully, only apply to the folder settings themselves. The server synchronizes all folders and their messages, so other clients have up-to-date views of mailboxes managed in Thunderbird.

We’re working on making EWS authentication as complete as possible, and are working with users who are helping us test less usual custom configurations. We have support for on-premises servers (aka ones your organization hosts instead of Microsoft hosting it) not using OAuth2, but this is a goal we’re working towards, along with supporting NTLM. If you have an unusual hosting or authentication option, please check out our Wiki article and get in touch to help us test.

Exchange features

Attachments: Downloading and displaying special and inline attachments should be supported. The team has especially made sure Thunderbird supports detaching and deleting attachments as well. If something doesn’t work, please report it on Bugzilla! 

Message storage: Messages come in two pieces: headers and bodies. Thunderbird connects, goes through folders in order, and pulls down the headers, which are easy to download. Downloading message bodies is a longer process. We’re working on adding folder subscriptions for more control of this process, and there is already an option to deselect individual folders from offline storage.

Sync: We made sure messages and folders are kept in sync with the server, so users can move between Thunderbird, other mail clients, and the webview. However, Thunderbird only syncs with the server on a configurable time interval after startup, which is set by default to 10 minutes. You can always use the ‘check new messages’ setting to force an instant sync.

Folder operations: Thunderbird supports all normal folder operations with EWS, except for sharing folders. This is difficult to replicate at present without a Microsoft-supported API we can use.

Filters: Filters should mostly work, though there are some limits. Filtering on a non-standard header isn’t supported, as we sync only a limited set of metadata. While Thunderbird 145 doesn’t support message body filters, this is in very active development and will be improved in either 146 or 147. Another limit involves interactions between filters and notifications: you will still get notifications if a filter fires for a folder you have set not to notify you about. Addressing these limitations is a current area of active development.

Search: Search for EWS accounts will function the same as it does for non-EWS accounts in Thunderbird with Message Search, Advanced Search or Quick Filters. You’ll want to start searches after your messages have downloaded, since search operates locally.

Report Bugs, Make Suggestions, and Help Test

As with everything else in Thunderbird, bug reports, suggestions, and user testing help make things even better. As stated above, if you have a non-standard hosting or authentication option, please join us on Matrix in either the Thunderbird Community Desktop Support or the Thunderbird Desktop Developers channel to learn how to join the testing effort. Test with Daily if you feel comfortable using it, or with Beta; even testing in Release helps!

If you encounter a bug with an Exchange account, please report it on Bugzilla using the ‘Networking: Exchange’ component of the ‘MailNews: Core’ product. Have a feature you’d like to see? Suggest it at Mozilla Connect.

What about Mobile, Microsoft Graph, or Calendar and Contacts?

While the work the team has done to bring Exchange support won’t directly transfer to the Android and iOS apps, it nonetheless gives us an increased familiarity with the protocol. This experience will help us bring Exchange and eventually Graph API to the mobile clients. Speaking of Microsoft Graph, this is our next priority for development. Microsoft is discontinuing support for EWS on Exchange Online accounts next October. Thankfully, work to add Microsoft Graph should go much faster, thanks to the foundational efforts with Exchange.

This does mean that the team will need to delay adding Calendar and Contacts support on top of email until Graph is done. Stay tuned to the Thunderbird blog for our monthly development updates and any special reports.

VIDEO (Also on Peertube):

Resources:

Exchange Mozilla Support Article: https://support.mozilla.org/en-US/kb/thunderbird-and-exchange

Exchange Mozilla Wiki Post (with call for testing): https://wiki.mozilla.org/Thunderbird%3AExchange

Reach out on Matrix: https://matrix.to/#/#thunderbird:mozilla.org 

Bugzilla (use Exchange component for reporting): https://bugzilla.mozilla.org/enter_bug.cgi?product=MailNews%20Core

The post VIDEO: Exchange Email Support appeared first on The Thunderbird Blog.

The Mozilla BlogWhat we learned about choice and control online this year

Earlier this year, we invited you to join us in celebrating online choice, and to take a stand for independence and control in your digital life. It’s a call to action at the heart of our campaign Open What You Want, which celebrates autonomy, defiance, and showing up online exactly as you are, starting with the simple act of choosing your browser. It’s one of the most important digital decisions you can make, shaping how you experience the web, protect your data, and express yourself online.

We wanted to understand how people think about choice in their everyday lives, how they express it, celebrate it, and fight for it. So we took Firefox on the road to connect with our communities IRL to learn more.

From coffee raves to cosplay: What we learned about choice IRL

We showed up in places where choice is part of the experience — in cities and cultural hubs where creativity, fandom, and freedom of expression thrive. From the Heroes Festival in Freiburg and our House Blend day-rave series in Chicago, Berlin, LA, and Munich, to TwitchCon in San Diego, our Footbrawl tournament in Berlin, and Comic Con in Stuttgart.

Everywhere we went, one thing was clear: people love having real choice in the moments that matter to them — whether it’s picking the coffee blend that powers their day, choosing their cosplay or gaming character, or deciding how they show up online. 

But online, choice and control have slipped from our hands, and often, when it feels like we’re choosing, Big Tech platforms have already decided for us.

Three Firefox event scenes showing a card-game booth, a pink pop browser baddie photo frame, and a drink stand serving colorful beverages. Image credits (left to right): Mondo Robot, Holger Talinski & The Barkers

The reality of online choice today

To unpack this problem, we polled 8,000 adults (aged 18 and older) in France, Germany, the UK and the U.S. on how they navigate choice and control both online and offline.

The survey, conducted by research agency YouGov, showcases a tension between people’s desire to have control over their data and digital privacy, and the reality of the internet today — a reality defined by Big Tech platforms that make it difficult for people to exercise meaningful choice online:

  • Only 16% feel in control of their privacy choices (highest in Germany at 21%)
  • 24% feel it’s “too late” because Big Tech already has too much control or knows too much about them. And 36% said the feeling of Big Tech companies knowing too much about them is frustrating — highest among respondents in the U.S. (43%) and the UK (40%)
  • Practices respondents said frustrated them were Big Tech using their data to train AI without their permission (38%) and tracking their data without asking (47%; highest in the U.S. at 55% and lowest in France at 39%)

And from our existing research on browser choice, we know how defaults that are hard to change and confusing settings can bury alternatives, limiting people’s ability to choose for themselves — the real problem that fuels these dynamics.

Bar chart comparing US, UK, Germany, and France respondents’ top frustrations with Big Tech, including data tracking, targeted content, AI training, and privacy concern.

Taken together, our new and existing insights could also explain why, when asked which actions feel like the strongest expressions of their independence online, choosing not to share their data (44%) was among the top three responses in each country (46% in the UK; 45% in the U.S.; 44% in France; 39% in Germany).

“At the heart of it, this study showcases why technology should serve humanity first and product design must be built with user agency, choice, and trust at the center,” says Ajit Varma, Product Vice President at Firefox. “When companies embrace this path, they can empower users and cultivate healthy competition that ultimately leads to better products for everyone.” 

We also see a powerful signal in how people think about choosing the communities and platforms they join — for 29% of respondents, this was one of their top three expressions of independence online.

“The kind of web communities thrive in — open, curious and shaped by its users — is increasingly at odds with the one Big Tech and the billionaires behind it are building. Powerful platforms today try to lock us into ecosystems and decide the products we use online,” says Christina Lang, VP of Global Marketing. “For Firefox, community has always been at the heart of what we do, and we’ll keep fighting to put real choice and control back in people’s hands so the web once again feels like it belongs to the communities that shape it.

“And with Open What You Want, we set out to deliver an important message through a series of fun, unconventional experiences: choosing your browser is one of the most important digital decisions you can make.”

For more insights about local country findings from the survey, check our France, Germany, UK and U.S. (including U.S. findings deck) press releases.


The post What we learned about choice and control online this year appeared first on The Mozilla Blog.

Martin ThompsonThe Hacklore Letter and Privacy

Before I start, go and read https://www.hacklore.org/letter.

When it comes to endpoint security, unless you are operating in the “Mossad” threat model[1], this is solid advice. The letter is absolutely right that the advice we used to give people about operational security practices has not aged well.

However, completely rejecting some of the defunct advice might come with privacy costs.

The letter’s authors seem to have given up on online privacy, which disappoints me greatly. Privacy nihilism isn’t really a healthy attitude and it has tainted the advice.

The Good Parts

Let’s discharge the obviously good stuff. Items 1 (Avoid public WiFi), 3 (Never charge devices from public USB ports), 4 (Turn off Bluetooth and NFC), and 6 (Regularly change passwords) are all very bad advice today.

The only reservations I have are minor. The advice on USB devices holds for phones and devices on the smarter end (watches, tablets, e-readers, etc…), less so for peripherals and other USB whatsits[2].

The updated advice on security practices is also pretty good. Updates, multi-factor authentication, and password managers are the best security advice you can give people today[3].

Privacy Nihilism

Unfortunately, privacy is a different story. We exist in a world where – if they could – many companies would collect and analyze everything you do.

In terms of the letter, item 5 (Regularly “clear cookies”) is basically pure nihilism. The implication is that you can be tracked no matter what you do.

I don’t subscribe to that perspective. Fingerprinting is pretty effective, but not as good as this implies. Not everyone is uniquely identifiable through their fingerprint. Also, browsers are making meaningful progress at making fingerprints less useful for many people.

You do have to stop giving websites your email and phone number though. It’s absolutely true that sites are using that information. Use temporary email addresses when you can[4].

That said, I don’t clear cookies. The resulting inconvenience is just not worth it. There is absolutely no security advantage from purging cookies. Instead, I recommend targeted use of private browsing modes, profiles, or containers.

Scanning QR Codes is Following a Link

Item 2 in the letter is “Never scan QR codes”. The claim is that this is bad advice.

Security-wise, this is mostly true. Sticker attacks[5] are probably the main reason that the security situation is not perfect. But that’s because of a more general phishing problem[6].

From a pure security perspective, the letter is absolutely correct. Opening any link in a browser is so overwhelmingly likely to be fine that it’s not worth worrying about. You won’t get pwned by even the most malicious link.

Browser security has gotten pretty good lately. Browsers aren’t 100% there, but you should not worry about the gap unless you are someone who operates in that “Mossad” threat model.

It’s also a bit worse if an app – rather than your browser – handles the link[7]. Either way, the risks to security are pretty remote. I don’t worry about getting poisoned by the food I buy at the supermarket; in the same way, you should not worry about following links.

The phishing problem is that you really need to trust whatever provides you with a link if you are going to enter information at the other end[6:1]. Otherwise, they could send you to some place that will steal your information[8]. That is the case though, no matter where you find the link.

Scanning QR Codes is Not Great for Privacy

Privacy-wise, QR codes are not as straightforward as the letter makes out. If you care about privacy, the old advice sadly holds some wisdom.

The privacy risk for QR codes is related to navigation tracking. If scanning a QR code is just following a link, following links in any context comes with a privacy cost[9].

There are small differences between links in QR codes, email[10], or on ordinary websites, but there’s one common factor: the site that you go to can learn everything about the place you found the link[11] and add that to your profile.

Every time you follow a link you are adding to the information that the destination website (or app) has about your activities.

QR codes are generally only placed in one physical location, so visiting the site almost always means that you are at that location.

That is, unlike links you find online, following a QR code can take information about where you are physically located and add it to tracking databases.

Take the QR codes that restaurants use for menus and ordering. Many restaurants outsource all the online stuff to external services. This is fair; restaurants would probably much rather focus on making and selling food, which is more than difficult enough.

Outsourcing means that there’s a good chance that you will end up on the same site as you visit different restaurants. That website now has a log of the places you visited, including details of when you visited, what you ate, the size of the bill, and whatever else the restaurant shares with them about you. You can almost guarantee that the information they collect is for sale, unless the terms and conditions promise otherwise[13].

Avoiding QR Code Tracking

So if you would rather not help people build profiles about you every time you scan a QR code, what can you do?

Personally, I only open QR codes in a private browsing window. That way, at least the tracking sites can’t use cookies to connect your QR code scans into a single profile. They just get isolated visits from what might be different people.

To help with that, you can set your default browser to one that doesn’t keep cookies, like Firefox Focus or DuckDuckGo’s browser, or any browser you’ve configured not to keep cookies.

Products could be better in this regard. As far as I’m aware, you can’t set a different browser for QR codes on most devices[14]. For my sins, I use an iPhone[15]. Firefox iOS used to have a QR code scanning button, which made it easy to switch to private browsing and open those links in a cookie- and tracking-free tab. A recent change made scanning QR codes much more annoying[16], so I’m still looking for a better option there.

In the end, it’s easy to see why the authors of the letter have adopted a nihilistic attitude toward privacy. Personally, I don’t accept that outcome, even if it means a little more work on my part.


  1. If you are, you know already. ↩︎

  2. Those devices can be vulnerable in ways your phone isn’t. Some will allow firmware to be updated by anything they attach to. That means they will become a risk to any machine that they are subsequently plugged in to. ↩︎

  3. I will take the opportunity to quibble about the way they present their advice on passphrases. My advice is to let your password manager suggest a high entropy password and only use passwords for those things that separate you from your password manager. That’s usually just operating system login and unlocking the password manager. Given how few of these passwords are likely needed, suggesting passphrases over strong passwords seems largely academic. The usability difference between a passphrase and a strong password is tiny; the passphrase might be more memorable, but the password might be quicker to type. ↩︎

  4. Firefox Relay, iCloud Hide My Email, and Fastmail Email Aliases are examples I’m aware of, but many mail providers have similar features. ↩︎

  5. This is where an original QR code is covered with a sticker directing someone to a different site. A QR code on a parking meter for payments is a great example. An attacker can collect parking payments – at inflated prices – for a while before the attack is noticed. ↩︎ ↩︎

  6. The golden rule of the web is: If you are going to enter information into a site, especially when money is involved, type its address in to get to the site[8:1]. ↩︎ ↩︎ ↩︎

  7. Links can also target any app that registers interest in handling URIs. A little more so on phones than desktop computers. Apps generally aren’t as well hardened against attack as browsers, but they are also generally easier to defend, because they have less functionality. The best advice I can give there is to be careful about what apps you install. I liken visiting a web site to a casual encounter; installing an app is much more personal. Either way, the extent to which you are exposed to infection increases with intimacy. ↩︎

  8. Passwords especially. You should never type passwords into a website. That is what a password manager is for. You should only type passwords to get to your password manager. ↩︎ ↩︎

  9. Yes, this is a straight cost, not a risk. There’s no probability involved. ↩︎

  10. There is a very different reason not to click on links in email[12]. A scammer might attempt to convince you that they are someone you trust and get you to send them something you might regret. Like your banking password or money. This is much like the QR code sticker attack[5:1], except that the attacker only has to send you mail that passes mail filters and looks legit. ↩︎

  11. On the web, the place that shows you a link also learns that you clicked it. This is not true for email and QR codes, but that makes very little difference privacy-wise. ↩︎

  12. Clicking on a link in email isn’t always a bad idea. Clicking the link lets the site know that you received their message. That’s the whole point of emails asking you to confirm that you own an email address, so go ahead and click those. Just make sure to close the tab immediately. At least before you put any other information into the site[6:2]. ↩︎

  13. Not like you could have read terms and conditions before scanning the QR code. Or that anyone has time to read them. ↩︎

  14. I’d love to know if there are any operating systems that let you set a different app for QR code links, that seems like it would be a useful feature. ↩︎

  15. The 13 mini is still the only phone in a reasonable form factor that is still relatively current. All other phones are too big. It’s a shame that most web experiences a) run on Safari and b) are awful. The latter is the fault of sites, not so much the device. ↩︎

  16. OK, here goes: Unlock your phone, go to the home screen. Open Firefox, go to the tabs view, hit the Private option, open a new tab. Switch to the camera, scan the code, tap the option to open the link. You need to open the tab, because Firefox will use the browsing mode that was last used. ↩︎

The Rust Programming Language BlogAnnouncing Rust 1.92.0

The Rust team is happy to announce a new version of Rust, 1.92.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.92.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.92.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.92.0 stable

Deny-by-default never type lints

The language and compiler teams continue to work on stabilization of the never type. In this release the never_type_fallback_flowing_into_unsafe and dependency_on_unit_never_type_fallback future compatibility lints were made deny-by-default, meaning they will cause a compilation error when detected.

It's worth noting that while this can result in compilation errors, it is still a lint; these lints can all be #[allow]ed. These lints also will only fire when building the affected crates directly, not when they are built as dependencies (though a warning will be reported by Cargo in such cases).

These lints detect code which is likely to be broken by the never type stabilization. It is highly advised to fix them if they are reported in your crate graph.
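As a hedged illustration, adapted from the rustc lint documentation: in the sketch below, the return arm has type !, so the type of the unsafe call in the other arm is chosen purely by never type fallback. With today's () fallback it is zeroed::<()>() and harmless; under ! fallback it would be zeroed::<!>(), which is undefined behavior, and that is exactly what the first lint flags.

fn main() {
    if true {
        // A tail expression of type `!` places no constraint on the type
        // of the if/else, so inference has to fall back.
        return
    } else {
        // The return type of zeroed() is chosen by never type fallback.
        // Rust 1.92 reports this as an error by default.
        unsafe { std::mem::zeroed() }
    };
}

The usual fix is an explicit annotation such as std::mem::zeroed::<()>(), or #[allow(never_type_fallback_flowing_into_unsafe)] if the code is known to be sound.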

We believe there to be approximately 500 crates affected by this lint. Despite that, we believe this to be acceptable, as lints are not a breaking change and it will allow for stabilizing the never type in the future. For more in-depth justification, see the Language Team's assessment.

unused_must_use no longer warns about Result<(), UninhabitedType>

Rust's unused_must_use lint warns when ignoring the return value of a function, if the function or its return type is annotated with #[must_use]. For instance, this warns if ignoring a return type of Result, to remind you to use ?, or something like .expect("...").

However, some functions return Result, but the error type they use is not actually "inhabited", meaning you cannot construct any values of that type (e.g. the ! or Infallible types).

The unused_must_use lint now no longer warns on Result<(), UninhabitedType>, or on ControlFlow<UninhabitedType, ()>. For instance, it will not warn on Result<(), Infallible>. This avoids having to check for an error that can never happen.

use core::convert::Infallible;
fn can_never_fail() -> Result<(), Infallible> {
    // ...
    Ok(())
}

fn main() {
    can_never_fail();
}

This is particularly useful with the common pattern of a trait with an associated error type, where the error type may sometimes be infallible:

trait UsesAssocErrorType {
    type Error;
    fn method(&self) -> Result<(), Self::Error>;
}

struct CannotFail;
impl UsesAssocErrorType for CannotFail {
    type Error = core::convert::Infallible;
    fn method(&self) -> Result<(), Self::Error> {
        Ok(())
    }
}

struct CanFail;
impl UsesAssocErrorType for CanFail {
    type Error = std::io::Error;
    fn method(&self) -> Result<(), Self::Error> {
        Err(std::io::Error::other("something went wrong"))
    }
}

fn main() {
    CannotFail.method(); // No warning
    CanFail.method(); // Warning: unused `Result` that must be used
}

Emit unwind tables even when -Cpanic=abort is enabled on linux

Backtraces with -Cpanic=abort previously worked in Rust 1.22 but were broken in Rust 1.23, as we stopped emitting unwind tables with -Cpanic=abort. In Rust 1.45 a workaround in the form of -Cforce-unwind-tables=yes was stabilized.

In Rust 1.92 unwind tables will be emitted by default even when -Cpanic=abort is specified, allowing for backtraces to work properly. If unwind tables are not desired then users should use -Cforce-unwind-tables=no to explicitly disable them being emitted.
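As a minimal sketch of the effect (the build commands here are our assumption of typical usage, not taken from the release notes): compile the program below with -Cpanic=abort on Linux and run it with RUST_BACKTRACE=1; on Rust 1.92 the process still aborts, but the backtrace now resolves without the workaround.

// Build with e.g. RUSTFLAGS="-C panic=abort" cargo build, or set
// panic = "abort" in the Cargo.toml profile, then run the binary with
// RUST_BACKTRACE=1. Unwind tables are now emitted by default, so the
// printed backtrace resolves even though the panic aborts the process.
fn main() {
    panic!("aborting, but with a usable backtrace");
}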

Validate input to #[macro_export]

Over the past few releases, many changes were made to the way built-in attributes are processed in the compiler. This should greatly improve the error messages and warnings Rust gives for built-in attributes and especially make these diagnostics more consistent among all of the over 100 built-in attributes.

To give a small example, in this release specifically, Rust became stricter in checking what arguments are allowed to macro_export by upgrading that check to a "deny-by-default lint" that will be reported in dependencies.
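As a small, hedged example of what the stricter check rejects (to the best of our knowledge, local_inner_macros is the only argument #[macro_export] accepts):

// Fine: no arguments, or the recognized local_inner_macros argument.
#[macro_export]
macro_rules! greet {
    () => { println!("hello") };
}

// Rejected: an unrecognized argument used to produce only a warning,
// but the check is now a deny-by-default lint, so this fails to compile.
#[macro_export(not_a_real_argument)]
macro_rules! shout {
    () => { println!("HELLO") };
}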

Stabilized APIs

These previously stable APIs are now stable in const contexts:

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.92.0

Many people came together to create Rust 1.92.0. We couldn't have done it without all of you. Thanks!

This Week In RustThis Week in Rust 629

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is mdbook-lint, a markdown linter geared towards mdbook, but useful with any markdown.

Thanks to josh rotenberg for the self-suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Let us know if you would like your feature to be tracked as a part of this list.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No Calls for participation were submitted this week.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

  • RustWeek 2026 | CFP closes 2025-12-31 | Utrecht, The Netherlands | 2026-05-19 - 2026-05-20

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!

Updates from the Rust Project

494 pull requests were merged in the last week

Compiler
Library
Cargo
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

Overall result is negative this week, but both main regressions are on track to be addressed. No outstanding changes otherwise.

Triage done by @panstromek. Revision range: eca9d93f..55495234

Summary:

(instructions:u)             mean    range             count
Regressions ❌ (primary)      0.4%    [0.1%, 4.3%]      111
Regressions ❌ (secondary)    0.4%    [0.1%, 2.2%]      97
Improvements ✅ (primary)     -1.0%   [-1.3%, -0.7%]    2
Improvements ✅ (secondary)   -0.2%   [-0.3%, -0.0%]    9
All ❌✅ (primary)             0.4%    [-1.3%, 4.3%]     113

3 Regressions, 2 Improvements, 3 Mixed; 3 of them in rollups. 30 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs entered Final Comment Period this week.

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs
Rust Compiler Team (MCPs only) Rust RFCs

No Items entered Final Comment Period this week for Cargo, Language Team, Language Reference, Leadership Council or Unsafe Code Guidelines.

Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs
  • No New or Updated RFCs were created this week.

Upcoming Events

Rusty Events between 2025-12-10 - 2026-01-07 🦀

Virtual
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

[..] if a breaking change is going to happen, it’s much better to make lock automatically panic than to make panics silently unlock.

Rain on their blog

Thanks to hkBst for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by:

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Firefox Add-on Reviews2025 Staff Pick Add-ons

While nearly half of all Firefox users have installed an add-on, it’s safe to say nearly all Firefox staffers use add-ons. I polled a few of my peers and here are some of our staff-favorite add-ons of 2025…

Falling Snow Animated Theme

Enjoy the soothing mood of Falling Snow Animated Theme. This motion-animated dark theme turns Firefox into a calm wintry night as snowflakes cascade around the corners of your browser. 

Privacy Badger

The flagship anti-tracking extension from privacy proponents at the Electronic Frontier Foundation, Privacy Badger is built to look for a certain set of actions that indicate a web page is trying to secretly track you. 

Zero set up required. Just install Privacy Badger and it will automatically search for third-party cookies, HTML5 local storage “supercookies,” canvas fingerprinting, and other sneaky tracking methods. 

Adaptive Tab Bar Color

Turn Firefox into an internet chameleon. Adaptive Tab Bar Color changes the colors of Firefox to match whatever website you’re visiting.

It’s beautifully simple and sublime. No setup required, but you’re free to make subtle adjustments to color contrast patterns and assign specific colors for websites. 

Rainy Spring Sakura by MaDonna

Created by one of the most prolific theme designers in the Firefox community, MaDonna, we love Rainy Spring Sakura’s bucolic mix of calming colors. 

It’s like instant Zen mode for Firefox. 

Return YouTube Dislike

Do you like the Dislike? YouTube removed the thumbs-down display, but fortunately Return YouTube Dislike came along to restore our view into the sometimes brutal truth of audience sentiment.

Other Firefox users seem to agree…

“Does exactly what the name suggests. Can’t see myself without this extension. Seriously, bad move on YouTube for removing such a vital tool.”

Firefox user OFG

“i have never smashed 5 stars faster.”

Firefox user 12918016

Return YouTube Dislike re-enables a beloved feature.

LeechBlock NG

Block time-wasting websites with LeechBlock NG — easily one of our staff-favorite productivity tools.

Lots of customization features help you stay focused and free from websites that have a way of dragging you down. Key features: 

  • Block entire websites or just portions (e.g. allow YouTube video pages but block the homepage)
  • Block websites based on time of day, day of the week, or both
  • Time limit customization (e.g. only 1 hour of Reddit per day)

DarkSpaceBlue

Drift through serene outer space as you browse the web. DarkSpaceBlue celebrates the infinite wonder of life among the stars.

LanguageTool – Grammar and Spell Checker

Improve your prose anywhere you write on the web. LanguageTool – Grammar and Spell Checker will make you a better writer in 25+ languages. 

Much more than a basic spell checker, this privacy-centric writing aid is packed with great features:

  • Offers alternate phrasing for brevity and clarity
  • Recognizes common misuses of similar sounding words (e.g. there/their, your/you’re)
  • Works with all web-based email and social media
  • Provides synonyms for overused words
LanguageTool can help with subtle syntax improvements.

Sink It for Reddit!

Imagine a more focused and free-feeling Reddit — that’s Sink It for Reddit!

Some of our staff-favorite features include:

  • Custom content muting (e.g. ad blocking, remove app install and login prompts)
  • Color-coded comments
  • Streamlined navigation
  • Adaptive dark mode

Sushi Nori

Turns out we have quite a few sushi fans at Firefox. We celebrate our love of sushi with the savory theme Sushi Nori.

Mozilla ThunderbirdThunderbird Send Security Audit with OSTIF and 7ASecurity

As we get ready for the Thunderbird Pro launch, we want every service we offer to be secure and worthy of the trust our community places in us. That means being honest about where we stand today and the work we are doing to meet the promises we are making.

Recently we partnered with OSTIF, the Open Source Technology Improvement Fund, and 7ASecurity to perform a full security audit of Thunderbird Send. As previously introduced, Send is an end-to-end encrypted large file sharing service that will be part of the overall Thunderbird Pro subscription suite coming in 2026. It is built on the foundation of the original Firefox Send project, although much has changed since those days.

While the audit focused on Send, the 7ASecurity team also reviewed parts of our shared infrastructure. That extra visibility resulted in meaningful hardening improvements across all of our products.

This was a whitebox audit, which means the auditors had full access to our systems and source code. They reviewed both the client and server sides of the service. They also carried out supply chain analysis, where they examined how our dependencies are managed, and threat modelling, which helps identify how attackers might approach a system even if there is no known exploit today.

The Thunderbird team has already addressed most of the items in the report, including all critical vulnerabilities and almost all non-critical hardening recommendations. A few require more time because they relate to the organization of our broader infrastructure. For example, all Thunderbird Pro services currently run under a single AWS account. This is fairly normal in the early stages of building a platform. As the services mature and become more distinct, we will split them into separate accounts for stronger isolation.

The audit highlighted two vulnerabilities: one critical and one high severity. There were also twenty recommendations for further strengthening and improvement. One of the issues involved an API endpoint that had the potential to expose some user data without requiring authentication, and another created the possibility of a denial-of-service attack. While neither issue was actually exploited, the conditions that made them possible needed to be removed. Both were addressed and fixed in April.

The auditors also noted theoretical paths that could lead to privilege escalation, where attackers use one part of a system to gain more access than intended. This does not mean a privilege escalation exists today, but that some patterns could be tightened to prevent them in the future. These concerns apply only to older infrastructure, such as where we were running Appointment Beta. Once we migrate these users from appointment.day to the new appointment.tb.pro, we will retire the older systems entirely.

Another recommendation involves adding build attestations. These allow anyone to verify that a software build came from us and has not been tampered with. This is something we plan to implement in 2026.

Not everything in the report was a list of problems. In fact, the auditors highlighted several positive aspects of the collaboration. Their notes describe a team that was prepared and organized from the beginning, which allowed the audit work to begin without delays. Communication was smooth through email and a shared Element channel. The Send engineering team was consistently helpful and responsive, providing access and information whenever needed. The auditors also appreciated that we gave them full staging visibility, documentation, test accounts and source code. Their updates throughout the process were structured and consistent. The final report even comments on the clarity of the project as a whole, which helped them form a well-informed view of our security posture.

The report closes with detailed guidance and commentary, but it also reflects confidence that Thunderbird is taking the right approach to security. That is exactly why we welcome third party audits. Open source only works when everyone can see the work, question it and verify it. Thunderbird Pro will follow those same values as it develops into a complete ecosystem of secure, privacy respecting services.

We will continue improving Send and the rest of our Pro services, and we look forward to sharing more as we get closer to launch. Thank you for being part of this journey and for pushing us to build something stronger.

The full report can be found here.

The post Thunderbird Send Security Audit with OSTIF and 7ASecurity appeared first on The Thunderbird Blog.

Data@MozillaIncident Report: A compiler bug and JSON

It all started rather inconspicuously: The Data Engineering team filed a bug report about a sudden increase in schema errors at ingestion of telemetry data from Firefox for Android. At that point in time about 0.9% of all incoming pings were not passing our schema validation checks.

The data we were seeing was surprising. Our ingestion endpoint received valid JSON that contained snippets like this:

{
    "metrics": {
        "schema: counter": {
            "glean.validation.pings_submitted": {
                "events": 1
            }
        },
        ...
    },
    ...
}

What we would expect and would pass our schema validation is this:

{
    "metrics": {
        "labeled_counter": {
            "glean.validation.pings_submitted": {
                "events": 1
            }
        },
        ...
    },
    ...
}

The difference? 8 characters:

-        "schema: counter": {
+        "labeled_counter": {

8 different characters that still make up valid JSON, but break validation.

A week later the number of errors kept increasing, affecting up to 2% of all ingested pings from Firefox for Android Beta. That’s worryingly high. That’s enough to drop other work and call an incident.

Aside: Telemetry ingestion

In Firefox the data is collected using the Glean SDK. Data is stored in a local database and eventually assembled into what we call a ping: a bundle of related metrics, gathered in a JSON payload to be transmitted. This JSON document is then POSTed to the Telemetry edge server. From there the decoder eventually picks it up and processes it further. One of the early things it does is verify the received data against one of the pre-defined schemas. When data is coming from the Glean SDK it must pass the pre-defined glean.1.schema.json. This essentially describes which fields to expect in the nested JSON object. One thing it is expecting is a labeled_counter. A thing it is NOT expecting is schema: counter. In fact, keys other than the listed ones are forbidden.

The missing schema:_

The data we were receiving from a growing number of clients contained 8 bytes that we didn’t expect in that place: schema: . That 8-character string didn’t even show up in the Glean SDK source code. Where does it come from? Why was it showing up now?

We did receive entirely valid JSON, so it’s unlikely to be simple memory corruption[1]. More like memory confusion, if that’s a thing.

We know where the payload is constructed. The nested object for labeled metrics is constructed in its own function. It starts with string formatting:

let ping_section = format!("labeled_{}", metric.ping_section());

There’s our 8-character string labeled_ that gets swapped. The Glean SDK is embedded into Firefox inside mozilla-central and compiled with all the other code together. A single candidate for the schema: string exists in that codebase. That’s another clue it could be memory confusion.

My schema? Confused.

I don’t know much about how string formatting in Rust works under the hood, but luckily Mara blogged about it 2 years ago: Behind the Scenes of Rust String Formatting: format_args!() (and then recently improved the implementation[2]).

So the format! from above expands into something like this:

std::fmt::format(
    // Simplified expansion of format_args!():
    std::fmt::Arguments {
        template: &[Str("labeled_"), Arg(0)],
        arguments: &[&metric.ping_section() as &dyn Display],
    }
);

Another clue that the labeled_ string is referenced all by itself and swapping out the pointer to it would be enough to lead to the corrupted data we were seeing.
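To make that concrete, here is a hedged, self-contained illustration (not the Glean code): a &str is just a pointer plus a length, so redirecting the pointer at a different 8-byte constant changes the output while keeping it valid UTF-8, and therefore valid JSON.

fn main() {
    let good: &str = "labeled_"; // 8 bytes, normally in read-only data
    let bad: &str = "schema: ";  // a different constant, also exactly 8 bytes
    assert_eq!(good.len(), bad.len());

    // The same formatting code with a different pointer still produces
    // well-formed output:
    println!("{}counter", good); // labeled_counter
    println!("{}counter", bad);  // schema: counter
}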

Architecturing more clues

Whenever we’re faced with data anomalies we start by dissecting the data to figure out if the anomalies are from a particular subset of clients. The hope is that identifying the subset of clients where it happens gives us more clues about the bug itself.

After initially focusing too much on actual devices, colleagues helpfully pointed out that the actual split was along the device’s architecture[3]:

Data since 2025-11-11 showing a sharp increase in errors for armeabi-v7a clients

ARMv8, the 64-bit architecture, did not run into this issue[4]. ARMv7, purely 32-bit, was the sole driver of this data anomaly. Another clue that something in the code specific to this architecture was causing this.

Logically unchanged

With a hypothesis about what was happening, but no definite answer why, we turned to speculative engineering: let’s avoid the code path that we think is problematic.

By explicitly listing out the different strings we want to have in the JSON payload we avoid the formatting and thus hopefully any memory confusion.

let ping_section = match metric.ping_section() {
    "boolean" => "labeled_boolean".to_string(),
    "counter" => "labeled_counter".to_string(),
    // <snip>
    _ => format!("labeled_{}", metric.ping_section()),
};

This was implemented in 912fc80 and shipped in Glean v66.1.2. It landed in Firefox the same day of the SDK release and made it to Firefox for Android Beta the Friday after. The data shows: It’s working, no more memory confusion!

The number of errors has been on a downturn ever since the fix landed on 2025-11-26

A bug gone but still there

The immediate incident-causing data anomaly was mitigated, and the bug will not make it into the Firefox 146 release.

But we still didn’t know why this was happening in the first place. My colleagues Yannis and Serge kept working and searching and were finally able to track down what exactly is happening in the code. The bug contains more information on the investigation.

While I was trying to read and understand the disassembly of the broken builds, they went ahead and wrote a tiny emulator (based on the Unicorn engine) that runs just enough of the code to find the offending code path[5].

> python ./emulator.py libxul.so
Path: libxul.so
GNU build id: 1b9e9c8f439b649244c7b3acf649d1f33200f441
Symbol server ID: 8F9C9E1B9B43926444C7B3ACF649D1F30
Please wait, downloading symbols from: https://symbols.mozilla.org/try/libxul.so/8F9C9E1B9B43926444C7B3ACF649D1F30/libxul.so.sym
Please wait, uncompressing symbols...
Please wait, processing symbols...
Proceeding to emulation.
Result of emulation: bytearray(b'schema: ')
This is a BAD build.

The relevant section of the code boils down to this:

ldr   r3, [pc, #0x20c]     @ load the slice's pc-relative offset from a nearby constant
add   r3, pc               @ turn it into an absolute pointer to the slice
strd  r3, r0, [sp, #0xd0]  @ store the slice pointer (and r0) on the stack
add   r1, sp, #0xd0        @ r1 = address of that stack slot
bl    alloc::fmt::format_inner

The first two instructions build the pointer to the slice in r3, by using a pc-relative offset found in a nearby constant. Then we store that pointer at sp+0xd0, and we put the address sp+0xd0 into r1. So before we reach alloc::fmt::format_inner, r1 points to a stack location that contains a pointer to the slice of interest. The slice lives in .data.rel.ro and contains a pointer to the string, and the length of the string (8). The string itself lives in .rodata.

In good builds, the slice data that r3 points to looks like this:

0x06f0c3d4: 0x005dac18  -->  "labeled_"
0x06f0c3d8:        0x8
0x06f0c3dc: 0x0185d707  -->  "/builds/<snip>/rust/glean-core/src/storage/mod.rs"
0x06f0c3e0:       0x4d

In bad builds, however, it points to something containing our dreaded schema: string:

0x06d651c8: 0x010aa2e8  -->  "schema: "
0x06d651cc:        0x8
0x06d651d0: 0x01a869a7  -->  "maintenance: "
0x06d651d4:        0xd
0x06d651d8: 0x01a869b4  -->  "storage dir: "
0x06d651dc:        0xd
0x06d651e0: 0x01a869c8  -->  "from variant of type "
0x06d651e4:       0x15
0x06d651e8: 0x017f793c  -->  ": "
0x06d651ec:        0x2

This confirms the suspicion that it’s a compiler/linker bug. Now the question was how to fix that.

Firefox builds with a variety of Clang/LLVM versions. Mozilla uses its own build of LLVM and Clang to build the final applications; the exact version used is updated as soon as possible, but never on release. Sometimes additional patches are applied on top of the Clang release, like some backports fixing other compiler bugs.

After identifying that this is indeed a bug in the linker and that it has already been patched in later LLVM versions, Serge did all the work to bisect the LLVM release to find which patches to apply to Mozilla’s own Clang build. Ultimately he tracked it down to these two patches for LLVM:

With those patches applied, the old code, without our small code rearrangement, does not lead to broken builds anymore.

With the Glean code patched, the ingestion errors dropping and the certainty that we have identified and patched the compiler bug, we can safely ship the next release of Firefox (for Android).

Collaboration

Incidents are stressful situations, but a great place for collaboration across the whole company. The list of people involved in resolving this is long.

Thanks to Eduardo & Ben from Data Engineering for raising the issue.
Thanks to Alessio (my manager) for managing the incident.
Thanks to chutten and Travis (from my team) for brainstorming what caused this and suggesting solutions/workarounds.
Thanks to Donal (Release Management) for fast-tracking the mitigation into a Beta release.
Thanks to Alex (Release Engineering) for some initial investigation into the linker bug.
Thanks to Brad (Data Science) for handling the data analysis side.
Thanks to Yannis and Serge (OS integration) for identifying, finding and patching the linker bug.


Footnotes:

  1. Memory corruption is never “simple”. But if it were memory corruption I would expect data to be broken worse or in other places too. Not just a string swap in a single place.↩︎
  2. That improvement is not yet available to us. The application experiencing the issue was compiled using Rust 1.86.0.↩︎
  3. Our checklist initially omitted architecture. A mistake we since fixed.↩︎
  4. Apparently we do see some errors, but they are so infrequent that we can ignore them for now.↩︎
  5. Later Yannis wrote a script that can identify broken builds much more quickly, just by searching for the right string patterns.↩︎

Firefox Developer ExperienceFirefox WebDriver Newsletter 146

WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 146 release cycle.

Contributions

Firefox is an open source project, and we are always happy to receive external code contributions to our WebDriver implementation. We want to give special thanks to everyone who filed issues, bugs and submitted patches.

In Firefox 146 Khalid AlHaddad, who had already contributed to the previous release, submitted two new bug fixes:

WebDriver code is written in JavaScript, Python, and Rust, so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues for Marionette, or the list of mentored JavaScript bugs for WebDriver BiDi. Join our chatroom if you need any help to get started!

WebDriver BiDi

Marionette

Jan-Erik RedigerIncident Report: A compiler bug and JSON


This article is cross-posted to the Data@Mozilla blog; the full text appears in the Data@Mozilla entry above.

The Mozilla BlogTranslate the web your way, plus choose the Firefox icon that suits your vibe

Whenever you open Firefox, we want it to feel like it speaks your language and matches your style. This month, our mobile team is rolling out features inspired by community ideas, user requests and the small everyday moments that make browsing more delightful.

Today we’re bringing three of those ideas to life: on-device translations for iOS, choose your icon for Android and a new search option powered by Perplexity.

Translate the web now on iOS

If you’ve ever tapped a link only to land on a page in a language you don’t read, you know how quickly curiosity can turn into friction. Until now, iOS users didn’t have a built-in way to translate those pages privately within Firefox. That changes today.

We started rolling out translations last week in German, French and Japanese. This week we added Spanish and Portuguese and will roll out more languages soon. 

What’s special about this launch is that it’s not just another translation tool; it’s built on years of Mozilla research and designed to work entirely on your device.

Most browsers send your page content to the cloud before translating it. Firefox takes a different path: Everything happens locally, directly on your phone.

That means:

  • Your content never leaves your device.
  • Nothing is logged or stored.
  • And once the language model is downloaded, translations even work offline.

Building translations this way isn’t easy. Mobile devices have limited memory and battery, so our engineers designed smarter algorithms that translate only what you actually need to read — not the entire page at once.

It’s a small example of something much bigger: Mozilla’s commitment to building features that respect your privacy by default, not as an afterthought.

How it works

When Firefox detects that a page is in a different language from your device settings, it shows the translation icon in the toolbar and translates the page into the language set on the device.

[Caption: Translation icon appears in the toolbar]

Customize your Firefox icon – now on Android too

When we released choose your icon on iOS earlier this year, it quickly became one of the most charming ways to personalize Firefox. People loved picking the version that matched their mood, whether bold, classic or a little bit whimsical.

Now that same experience comes to Android.

Personalize your home screen

On Android, head to: Settings → Customize → App Icon. From there, you can browse a lineup of Firefox styles, including Momo, the warm, joyful fox hugging the Earth. What makes Momo special isn’t just the art itself, but the story behind it.

[Caption: Momo is just one of the many icons to choose from in Firefox mobile]

Momo was originally a five-minute doodle by Dutch illustrator Ruud Hendriks (@heyheymomodraws). Its playful energy immediately resonated with the Firefox team, who saw in it a spark of nostalgia that echoed Firefox’s early logo. Today, that doodle has become the first community-created Firefox app icon.

Ruud’s artwork reminds us that some of the most delightful product features start as small, genuine ideas from our community.

Read the full interview with Ruud to see how his sketch evolved into an icon now loved by Firefox users worldwide.


A new option for search, still on your terms

Search should feel flexible, something you can shape based on what you need. That’s why in our last release we introduced Perplexity, an AI-powered answer engine, as an optional search tool on mobile.

Perplexity provides conversational answers with citations, making it easier to get quick summaries without sifting through multiple pages. And, as always, you choose when or whether to use it.

You’ll find Perplexity in the search bar. It’s available globally and Perplexity maintains strict prohibitions against selling or sharing personal data.

It’s one more way Firefox gives you choice without compromising your values.

Created for everyday browsing

Whether you’re translating the web during your commute, giving your home screen a little personality or trying a new way to search, today’s updates reflect a simple goal: make Firefox feel more personal and more you.

And, just like Momo, many of these ideas were shaped by the Firefox community: the artists, contributors, testers and curious users who help us imagine what the browser can be.

We can’t wait to see how you use what’s new.


The post Translate the web your way, plus choose the Firefox icon that suits your vibe appeared first on The Mozilla Blog.

The Mozilla BlogYou got more with Firefox in 2025

In 2025, we rolled out one update after another, all aimed at making your browsing better — with more flow, speed, choice, and control over your information and experience. Your window to the internet, whether on desktop, mobile, or across all your devices, has gotten an upgrade this year.

More flow:

Tab Groups

Try Tab Groups to bring calm to tab chaos — whether you keep three tabs open or three thousand. Color-coded groups make it easy to gather related tabs, stay organized, and jump between projects without losing your place. News you read daily? Weekend hobby research? That big trip you’re planning? There’s a group for that.

Vertical Tabs

Vertical Tabs give you another way to browse — stacking tabs along the side of your window instead of across the top. If you like seeing more of your open tabs at a glance or want a tidier layout, Vertical Tabs give you an alternate view that’s easy to scan and move through.

Address Bar Shortcuts

Address Bar Shortcuts let you jump straight to what you’re looking for using simple, natural keywords. You can quickly search things like open tabs, bookmarks, history, or browser actions by typing helpful shortcuts (like @tabs or @bookmarks) right in the bar — an intuitive way to find what you need without breaking your flow.

More speed:

Shake to Summarize (iOS)

On mobile, every moment counts. Shake to Summarize lets you get the key points of what you’re reading with a quick shake or tap. Recipes highlight the steps, sports show the scores, and news pulls out main takeaways — all within seconds. It even earned a Special Mention in TIME’s Best Inventions of 2025. To activate it, you can:

  • Shake your device.
  • Tap the thunderbolt icon in the address bar.
  • Or, from the menu, tap three dots > Summarize Page.

Save Web Apps (Windows)

Firefox lets you save sites to your Windows taskbar and run them as web apps. Once clicked, they open in their own window, so your favorite tools and services are easy to find and quick to launch. To add any website to the taskbar, just click the web apps icon that appears in the address bar.

Link Previews

Link Previews give you a quick snapshot of what’s behind a link before you open it. No more opening a handful of tabs only to close most of them — just instant context to help you decide where to go next. To activate, click and hold a link for about a second (long press), or right-click on a link and choose ‘Preview Link’ from the menu. 

Unload Tabs

Unload Tabs helps your browser run more efficiently by putting inactive tabs to sleep. They stay visible and ready to reopen instantly when you need them — without slowing down the rest of your browser. Right-click any tab and select ‘Unload Tab’ to try it out.

More choice:

AI Chatbots

Unlike browsers that tie you to one default assistant, Firefox lets you choose the AI chatbot you want, right in the sidebar. Keep your preferred assistant within reach, get quick answers without switching tabs, and browse the way that works best for you.

Perplexity

We integrated Perplexity as a secondary search option, offering conversational answers with citations you can trust. It’s a powerful alternative for people who want direct, verifiable information without digging through long pages of results.

Custom Wallpapers

Now you can personalize the look and feel of your browser with curated wallpaper collections or your own images. Create a space that’s uniquely yours by opening a new tab and clicking the pencil icon to start customizing.

More control:

PDF Editing

Firefox’s built-in PDF editor now includes signatures and commenting tools. Add notes, mark up documents, sign forms, and review everything from one convenient sidebar — no extra software required.

Visual Search

Visual Search powered by Google Lens lets you look up what you see with a quick right-click on any image. This desktop-only feature makes searching more intuitive and curiosity-driven. For now, it requires Google as your default search engine.

Screen Lock for Private Tabs (Android)

Your private browsing is exactly that: private. Screen Lock protects your private tabs using your device’s biometric authentication — fingerprints, facial recognition, or PIN — keeping your activity secure from anyone who picks up your phone. 

Profile Management

Try profiles to help you keep different parts of your digital life separate. Work vs. personal browsing? School vs. gaming? Create profiles for each, switch between them instantly, and stay focused. Feedback from students, professionals, and contributors helped shape the version rolling out today.

Thanks for a great 2025!

You got a lot more with Firefox this year — from smoother tab management and faster ways to find information to new tools that give you more choice and more control. Wherever the internet takes you, we’ll keep building a browser that puts you first. 

To stay on top of the latest in the new year, be the first to know by checking our release notes or What’s New in Firefox. Thanks for being part of the journey.


The post You got more with Firefox in 2025 appeared first on The Mozilla Blog.

The Mozilla BlogMeet the artist behind Firefox’s new community-created app icon

Last year, the Firefox team set out to test something fans requested: choosing a custom app icon. The experiment was simple. Offer a small set of options and see how people use them.

The team created early concepts, but experiment lead Gabrielle Lussier noticed something was missing. The designs were clean and functional, but none captured the playful, emotional spark people associate with Firefox. That led the team to revisit a collection of fan art shared during Firefox’s 20th anniversary, and one illustration stood out immediately: a warm, whimsical drawing of Firefox hugging the Earth by Dutch illustrator Ruud Hendriks (@heyheymomodraws). 

“I love that it is reminiscent of our original logo from 2004, but modernized and simplified. It’s also adorable! How could you not love it!” said Gabrielle.

To select the icon, open Firefox and head to Settings → General (iOS) / Customize (Android) → App Icon. 

[Caption: First community-created app icon now available in Firefox]

Ruud is known for the charming, joyful characters in his comic series heyheymomo, and he brings that same energy to this design. He originally created the artwork as a quick doodle for fun. Today, it is the first community-created app icon in Firefox.

In the Q&A below, Ruud shares how the sketch came to life, what inspired it, and what it means to see his work appear inside a browser he has used for years.

Can you tell us a bit about yourself and what inspired you to participate in last year’s Firefox 20th anniversary fan art challenge?

The funny thing is, I participated before the challenge was even a thing! One day, I didn’t know what to draw and somehow felt inspired by the cute little fox icon in my dock. I drew my own version as a super loose doodle, completely on a whim, in just a few minutes. I thought it came out pretty cute, so I posted it on my social media just for fun. People vibed with it, and the Mozilla social team picked it up. A few weeks later, I got a message asking if I wanted to submit it for the challenge since they really liked it. Of course I said yes!

What does Firefox mean to you personally, as a brand, a browser, or a community?

I’ve been on the internet for a long time. Firefox has been my favourite browser since forever, and I’m a bit of a creature of habit, so it’s always stuck with me. I like how lightweight and simple it is. Plus, as a visually minded person, I totally judge books by their covers — and I’ve always loved the Firefox icon. It’s so appealing that it made me want to draw it in the first place.

[Caption: Momo is just one of the many icons you can select]

Where did the idea for your “Firefox hugging the Earth” artwork come from?

It’s my little homage to the older Firefox logo, the one that made me a Firefox fan. The new one is very stylish, but the older one has always had a special place in my heart. My own work is usually very cutesy, with smiley faces and friendly characters, so I just drew my own version of it in that style.

This looks hand-drawn. What tools or techniques did you use to create it?

The initial five-minute doodle was just a quick sketch on my iPad using the app Procreate. Since Mozilla was interested in making it an actual icon, I later created a high-resolution, smoother version using vector art.

How did you feel when you learned your artwork would become one of the official Firefox app icons?

As a longtime Firefox fan, I was over the moon and couldn’t believe all of this came from just a silly doodle I did on a whim. I think that’s the beauty of the internet — how something small and spontaneous can take off like that. I’m really honoured, and I hope you all like my silly, little icon.

What a fan-made icon says about how we build

Ruud’s icon shows how product features can come from small, genuine ideas. His artwork delivered exactly what the team set out to explore: a bit of delight, a touch of nostalgia, and a visual style that feels true to Firefox. This project reflects how Mozilla builds. We listen, we iterate, and we look for ways to bring community creativity into the product. Ruud’s contribution shows how users and artists can shape Firefox in ways that feel both personal and unexpected.


Ruud Hendriks is an illustrator from the Netherlands, specializing in cute and whimsical characters. He has extensive experience working on children’s toys, apps, and games, and now focuses primarily on his own comic series, heyheymomo, which follows the adventures of a dog and frog who are best friends.

His work is lighthearted and designed to brighten your day, even if just for a moment. You can explore his comics on Instagram @heyheymomodraws and find prints at heyheymomo.com.

The post Meet the artist behind Firefox’s new community-created app icon appeared first on The Mozilla Blog.

The Rust Programming Language BlogMaking it easier to sponsor Rust contributors

TLDR: You can now find a list of Rust contributors that you can sponsor on this page.

As with many other open-source projects, Rust depends on a large number of contributors, many of whom make Rust better on a volunteer basis or are funded for only a fraction of their open-source contributions.

Supporting these contributors is vital for the long-term health of the Rust language and its toolchain, so that it can keep its current level of quality, but also evolve going forward. Of course, this is nothing new, and there are currently several ongoing efforts to provide stable and sustainable funding for Rust maintainers, such as the Rust Foundation Maintainer Fund or the RustNL Maintainers Fund. We are very happy about that!

That being said, there are multiple ways of supporting the development of Rust. One of them is sponsoring individual Rust contributors directly, through services like GitHub Sponsors. This makes it possible even for individuals or small companies to financially support their favourite contributors. Every bit of funding helps!

Previously, if you wanted to sponsor someone who works on Rust, you had to go on a detective hunt to figure out who contributes to the Rust toolchain, whether they accept sponsorships, and through which service. That was a lot of work, and it could pose a real barrier to sponsorship. So we simplified it!

Now we have a dedicated Funding page on the Rust website, which helpfully shows members of the Rust Project who are currently accepting funds through sponsoring1. You can click on the name of a contributor to find out which teams they are a part of and what kind of work they do in the Rust Project.

Note that the list of contributors accepting funding on this page is non-exhaustive. We made it opt-in, so that contributors can decide for themselves whether they want to be listed there.

If you ever wanted to support the development of Rust "in the small", it is now simpler than ever.

  1. The order of people on the funding page is shuffled on every page load to reduce unnecessary ordering bias. A small sketch of the idea follows below.
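For illustration only, here is a minimal sketch of that per-load shuffle in Rust (the funding page itself is presumably not implemented in Rust, and the names below are made up): shuffle the contributor list before each render so nobody benefits from a fixed position.

```rust
// Requires the rand crate (e.g., rand = "0.8") in Cargo.toml.
use rand::seq::SliceRandom;

fn main() {
    // Made-up contributor handles, purely for illustration.
    let mut contributors = vec!["alice", "bob", "carol", "dave"];

    // Shuffle once per page render so the displayed order is random
    // and no single contributor always appears first.
    let mut rng = rand::thread_rng();
    contributors.shuffle(&mut rng);

    println!("{contributors:?}");
}
```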

Frederik BraunWhy the Sanitizer API is just setHTML()

Sanitizing HTML is the practice of taking a piece of HTML and removing some unwanted elements and attributes. We are specifying an API that will be directly built into the browser. In fact, you can already use it in Firefox Nightly and Chrome Canary.

Nowadays, HTML sanitization is often done …

Frederik BraunThe C3PO Bug in Lego Star Wars: The Complete Saga

Today: Something off topic, to feed the search engines.

My kids and I have a lot of fun with the video game Lego Star Wars: The Complete Saga, which was released in 2007. As it is quite old, the "complete saga" includes only the episodes 1 through 6. Frankly, these …

The Mozilla BlogWhen a video codec wins an Emmy

It’s not every day a video codec wins an Emmy. But yesterday, the Television Academy honored the AV1 specification with a Technology & Engineering Emmy Award, recognizing its impact on how the world delivers video content.

[Caption: The AV1 specification was honored with a Technology & Engineering Emmy Award on Dec. 4, 2025.]

The web needed a new video codec

Through the mid-2010s, video codecs were an invisible tax on the web, built on a closed licensing system with expensive, unpredictable fees. Most videos online relied on the H.264 codec, which open-source projects like Firefox could only support without paying MPEG LA license fees thanks to Cisco’s open-source OpenH264 module.

Especially as demand for video grew, the web needed a next-generation codec to make high-quality streaming faster and more reliable. H.265 promised efficiency gains, but there was no guarantee of another OpenH264-style arrangement. The risk was another fragmented ecosystem where browsers like Firefox couldn’t play large portions of the web’s video.

Enter AV1

To solve this, Mozilla joined other technical leaders to form the Alliance for Open Media (AOM) in 2015 and started ambitious work on a next-generation codec built from Google’s VP9, Mozilla’s Daala, and Cisco’s Thor.

The result was AV1, released in 2018, which delivered top-tier compression as an open standard under a royalty-free patent policy. It’s now widely deployed across the streaming ecosystem, including hardware decoders and optimized software decoders, which allow open-source browsers like Firefox to provide state-of-the-art video compression to all users across the web.

AV1 is also the foundation for the image format AVIF, which is deployed across browsers and provides excellent compression for still and animated images (AVIF is based on a video codec, after all).

The Emmy award reflects the value of open standards, open-source software, and the sustained work by AOM participants and the broader community fighting for an open web.

Looking ahead to AV2

AV1 fixed a structural problem in the ecosystem at the time, but the work isn’t finished. Video demand keeps rising, and the next generation of open codecs must remain competitive.

AOMedia is working on the upcoming release of AV2. It will feature meaningfully better compression than AV1, much higher efficiency for screen/graphical content, alpha channel support, and more.

As AV2 arrives, our goal remains unchanged: make video on the web open, efficient, and accessible to everyone.

The post When a video codec wins an Emmy appeared first on The Mozilla Blog.

Tarek ZiadéTwo Years of Building AI in Firefox

When I started working on AI at Mozilla two years ago, I was a Python developer with a background in web services and three months of machine learning experience from working on the Nuclia DB project. I was not someone who had trained models from scratch or built production ML infrastructure. Today, Firefox ships multiple AI features that run entirely on-device, and I helped build the infrastructure that makes that possible. This is a retrospective on what we accomplished and what I learned along the way.

Building the Foundation: The ML Inference Runtime

The first major challenge was creating a runtime that could run machine learning models directly in Firefox. We needed something that worked across platforms, respected user privacy, and didn’t require sending data to external servers.

We built the Firefox ML inference engine on top of two core technologies: the ONNX runtime for executing models, and Transformers.js to simplify the inference work. The architecture we settled on uses a dedicated content process for inference, keeping it isolated from the main browser process. Remote Settings distributes both the runtime and model configurations, while IndexedDB caches downloaded models locally.

One critical evolution was moving away from WebAssembly to run a pure C++ ONNX runtime under Transformers.js. This shift gave us significantly better performance and tighter integration with Firefox’s internals. Getting this right required deep systems-level work, and I was fortunate to work with fantastic engineers like Paul Adenot and Serge Guelton who brought the expertise needed to make it happen.

This multi-process design was crucial. It gave us stability, security, and the ability to update models without shipping new browser versions. We also created our own model hub, giving us control over model distribution while still supporting Hugging Face for developers who want broader model selection.

The API we exposed is deliberately simple. Developers create an engine instance with a task name and model ID, then run inference either synchronously or with streaming output. Behind the scenes, Firefox handles downloading models, managing cache, and choosing the right backend.

The First Real Project: PDF.js Alt Text

With the runtime in place, we needed a real feature to prove it worked. PDF.js alt text generation became that first end-to-end project, and I have written about it in detail before. But looking back now, it was more than just a feature. It was the template for everything that came after.

We chose a Vision Transformer paired with a distilled GPT-2 decoder, compressed to 180 million parameters and under 200MB on disk. The model runs in a couple of seconds on a laptop, generates descriptions locally, and never sends your PDF content anywhere. This shipped in Firefox 130, and it set the standard for how we approach AI: small models, local execution, and privacy by default.

The harder work was not the model architecture. It was dealing with biased training data and building a validation pipeline. COCO and Flickr30k datasets carried gender stereotypes and cultural assumptions. We rebuilt the dataset using GPT-4o annotations to generate cleaner, more neutral captions. Then we built a human-in-the-loop validation app where users could correct outputs, feeding those corrections back into retraining. That iterative cycle was what made the model genuinely useful.

Smart Tab Management and Beyond

Once the runtime was stable and we had proven we could ship a real feature, the next step was expanding to other use cases. Smart Tabs launched in Firefox 141, bringing local AI to tab management.

The feature is simple: right-click a tab group, select “Suggest more tabs for group,” and Firefox analyzes tab titles and descriptions to suggest similar tabs. Users can accept or reject suggestions. The AI runs entirely on-device, so your browsing data stays private.

This project showed that the infrastructure we built was flexible enough to handle different tasks. Smart Tabs did not require a new runtime or a new model distribution system—it reused what we already had. That reusability was proof the architecture was working.

After Smart Tabs, we added many other small features following the same pattern: laser-focused models running on-device for specific tasks. Each one reinforced the core principle: AI should solve real problems without compromising privacy. The infrastructure we built made it cheap to ship new capabilities, and the local-first approach meant users stayed in control of their data.

AI Window and the Server-Side Challenge

The reality is that not all AI features can run locally. Small, specialized models work well on-device, but larger language models (the kind that can handle complex conversations and broad knowledge tasks) still need server-side compute. That is where AI Window comes in.

Announced in November 2025, AI Window is an opt-in feature that brings a conversational AI assistant directly into Firefox. Unlike our local features, this required building infrastructure to support server-side inference while maintaining Firefox’s commitment to user choice and control.

Over the past several months, I have been working on the server-side LLM service and the overall architecture to make sure Firefox can reliably call external services when needed. This meant designing APIs, handling failures gracefully, managing rate limits, and ensuring the system could scale while still respecting user preferences. The work was less about the models themselves and more about building the bridge between Firefox and external AI providers in a way that gives users real control.

This hybrid approach (local AI for privacy-sensitive tasks, server-side AI for compute-intensive ones) is where the browser needs to go. But it raises important questions about privacy.

The Privacy Challenge for Server-Side AI

Local AI gives you perfect privacy: your data never leaves your device. But when a model runs on a server, you are trusting someone else with your prompts, your documents, and your questions. That trust model needs to change.

I am looking forward to industry standards around end-to-end encryption for running LLM inference with full privacy guarantees. The technology already exists. Flower.ai has built federated learning infrastructure with end-to-end encryption that allows large models to run on remote GPUs while keeping user data encrypted. Nvidia has Confidential Computing on H100 and Blackwell GPUs, using hardware-based trusted execution environments to protect code and data during inference. The performance overhead is minimal (often less than 5%) and the privacy guarantees are real.

But here is the problem: none of this is part of the de facto OpenAI API standard that most LLM services use today. If you want to call GPT-4 or Claude or any major hosted model, there is no standardized way to do it with end-to-end encryption or confidential compute guarantees. Your data goes to the server in plaintext, and you have to trust the provider’s privacy policy.

My hope is that it will soon be possible to run inference on the cloud with strong privacy guarantees as a standard feature, not a niche offering. The hardware is ready. The cryptographic techniques exist. What we need now is for the industry to adopt these capabilities as table stakes for AI services. Until that happens, local AI remains the gold standard for privacy, and server-side AI remains a compromise.

What Made This Possible

Building AI features in a browser is not the same as building AI features in a standalone app or a cloud service. The constraints are different. You have limited resources, strict privacy requirements, and the need to work across Windows, macOS, and Linux. Here is what made it work:

  • Starting small: We did not try to build everything at once. The first runtime was minimal. The first model was simple. We added complexity only when we needed it.

  • Privacy as a requirement, not a feature: Every decision started with “can this run locally?” If the answer was no, we either changed the approach or did not build it.

  • Reusable infrastructure: We built the runtime once and used it for multiple features. That meant each new AI capability got cheaper to ship.

  • Learning from real users: The validation app for PDF.js alt text was not just about improving the model—it was about understanding what real people needed. User feedback drove every iteration.

What I Learned

Two years ago, I did not know how to train a model or what ONNX was. Now I have shipped multiple AI features in production. Here is what stuck with me:

  • You do not need a PhD: Machine learning has a reputation for being inaccessible, but the tools have gotten good enough that you can learn by doing. I started with a pre-trained model, fine-tuned it, and kept iterating. Most of the work was engineering, not research.

  • Data quality beats model size: We spent more time cleaning datasets and handling bias than we did optimizing model architecture. A smaller model trained on better data outperformed a larger model trained on messy data.

  • Privacy is possible: The narrative around AI assumes everything needs to run in the cloud. It does not. Local models work. They are fast enough, small enough, and private by default.

  • Building the process matters more than building the model: The validation pipeline, the retraining loop, the distribution system. That infrastructure was more important than any single model.

What is Next

This work is not finished. We plan to iterate on PDF.js alt text, expand Smart Tabs, and bring AI Window to users who want conversational AI in their browser. WebNN is coming, and that will give us even better performance for local models. The Firefox ML runtime is still experimental, but it is stable enough that other teams are starting to build on it.

The bigger challenge is pushing the industry toward privacy-preserving server-side AI. Confidential compute and end-to-end encryption for LLM inference should not be experimental features. They should be the default. I hope to see more providers adopt these technologies and for standards bodies to make privacy guarantees a core part of the AI API specifications.

On a personal level, these two years showed me that AI in the browser is not just possible—it is the right way to do it. Local models give users control. They protect privacy. And they prove that you do not need to send your data to a server farm to get intelligent features. But when you do need server-side compute, it should come with strong privacy guarantees, not just promises.

What excites me the most is running AI locally. That is where the future of open AI lies: not just open models and open weights, but truly open AI that runs on your device, under your control, without gatekeepers or surveillance. The browser is the perfect platform to make that future real.

I am proud of what we built. More importantly, I am excited about what comes next.

Useful links

Firefox Features

Privacy-Preserving AI

The Rust Programming Language Blogcrates.io: Malicious crates finch-rust and sha-rust

Summary

On December 5th, the crates.io team was notified by Kush Pandya from the Socket Threat Research Team of two malicious crates, one of which was trying to cause confusion with the existing finch crate while adding a dependency on the other, which performed data exfiltration.

These crates were:

  • finch-rust - 1 version published November 25, 2025, downloaded 28 times, used sha-rust as a dependency
  • sha-rust - 8 versions published between November 20 and November 25, 2025, downloaded 153 times

Actions taken

The user in question, face-lessssss, was immediately disabled, and the crates in question were deleted from crates.io shortly after. We have retained the malicious crate files for further analysis.

The deletions were performed at 15:52 UTC on December 5th.

We reported the associated repositories to GitHub and the account has been removed there as well.

Analysis

Socket has published their analysis in a blog post.

These crates had no dependent downstream crates on crates.io, and there is no evidence of either of these crates being downloaded outside of automated mirroring and scanning services.

Thanks

Our thanks to Kush Pandya from the Socket Threat Research Team for reporting the crates. We also want to thank Carol Nichols from the crates.io team and Adam Harvey from the Rust Foundation for aiding in the response.

The Rust Programming Language BlogUpdating Rust's Linux musl targets to 1.2.5

Beginning with Rust 1.93 (slated for stable release on 2026-01-22), the various *-linux-musl targets will all ship with musl 1.2.5. This primarily affects static musl builds for x86_64, aarch64, and powerpc64le, which previously bundled musl 1.2.3. This update comes with several fixes and improvements, and a breaking change that affects the Rust ecosystem.

For the Rust ecosystem, the primary motivation for this update is to receive major improvements to musl's DNS resolver which shipped in 1.2.4 and received bug fixes in 1.2.5. When using musl targets for static linking, this should make portable linux binaries that do networking more reliable, particularly in the face of large DNS records and recursive nameservers.

However, 1.2.4 also comes with a breaking change: the removal of several legacy compatibility symbols that the Rust libc crate was using. A fix for this was shipped in libc 0.2.146 in June 2023 (2 years ago), and we have been waiting for newer versions of the libc crate to propagate throughout the ecosystem before shipping the musl update.

A crater run in July 2024 found only about 2.4% of Rust projects still affected. A crater run in June 2025 found 1.5% of Rust projects affected. Most of that change comes from crater analyzing more Rust projects: the absolute number of broken projects went down by 15%, while the absolute number of analyzed projects went up by 35%.

At this point we expect there will be minimal breakage, and most breakage should be resolved by a cargo update. We believe this update shouldn't be held back any longer, as it contains critical fixes for the musl target.

Manual inspection of some of the affected projects indicates they largely haven't run cargo update in 2 years, often because they haven't had any changes in 2 years. Fixing these crates is as easy as cargo update.

Build failures from this change will typically look like "some `extern` functions couldn't be found; some native libraries may need to be installed or have their path specified", often specifically for "undefined reference to `open64'", often while trying to build very old versions of the getrandom crate (hence the outsized impact on gamedev projects that haven't updated their dependencies in several years in particular):

Example Build Failure
[INFO] [stderr]    Compiling guess_the_number v0.1.0 (/opt/rustwide/workdir)
[INFO] [stdout] error: linking with `cc` failed: exit status: 1
[INFO] [stdout]   |
[INFO] [stdout]   = note:  "cc" "-m64" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib/self-contained/rcrt1.o" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib/self-contained/crti.o" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib/self-contained/crtbeginS.o" "/tmp/rustcMZMWZW/symbols.o" "<2 object files omitted>" "-Wl,--as-needed" "-Wl,-Bstatic" "/opt/rustwide/target/x86_64-unknown-linux-musl/debug/deps/{librand-bff7d8317cf08aa0.rlib,librand_chacha-612027a3597e9138.rlib,libppv_lite86-742ade976f63ace4.rlib,librand_core-be9c132a0f2b7897.rlib,libgetrandom-dc7f0d82f4cb384d.rlib,liblibc-abed7616303a3e0d.rlib,libcfg_if-66d55f6b302e88c8.rlib}.rlib" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib/{libstd-*,libpanic_unwind-*,libobject-*,libmemchr-*,libaddr2line-*,libgimli-*,librustc_demangle-*,libstd_detect-*,libhashbrown-*,librustc_std_workspace_alloc-*,libminiz_oxide-*,libadler2-*,libunwind-*}.rlib" "-lunwind" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib/{libcfg_if-*,liblibc-*}.rlib" "-lc" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib/{librustc_std_workspace_core-*,liballoc-*,libcore-*,libcompiler_builtins-*}.rlib" "-L" "/tmp/rustcMZMWZW/raw-dylibs" "-Wl,-Bdynamic" "-Wl,--eh-frame-hdr" "-Wl,-z,noexecstack" "-nostartfiles" "-L" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib/self-contained" "-L" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib" "-o" "/opt/rustwide/target/x86_64-unknown-linux-musl/debug/deps/guess_the_number-41a068792b5f051e" "-Wl,--gc-sections" "-static-pie" "-Wl,-z,relro,-z,now" "-nodefaultlibs" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib/self-contained/crtendS.o" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib/self-contained/crtn.o"
[INFO] [stdout]   = note: some arguments are omitted. use `--verbose` to show all linker arguments
[INFO] [stdout]   = note: /usr/bin/ld: /opt/rustwide/target/x86_64-unknown-linux-musl/debug/deps/libgetrandom-dc7f0d82f4cb384d.rlib(getrandom-dc7f0d82f4cb384d.getrandom.828c5c30a8428cf4-cgu.0.rcgu.o): in function `getrandom::util_libc::open_readonly':
[INFO] [stdout]           /opt/rustwide/cargo-home/registry/src/index.crates.io-1949cf8c6b5b557f/getrandom-0.2.8/src/util_libc.rs:150:(.text._ZN9getrandom9util_libc13open_readonly17hdc55d6ead142a889E+0xbc): undefined reference to `open64'
[INFO] [stdout]           collect2: error: ld returned 1 exit status
[INFO] [stdout]           
[INFO] [stdout]   = note: some `extern` functions couldn't be found; some native libraries may need to be installed or have their path specified
[INFO] [stdout]   = note: use the `-l` flag to specify native libraries to link
[INFO] [stdout]   = note: use the `cargo:rustc-link-lib` directive to specify the native libraries to link with Cargo (see https://doc.rust-lang.org/cargo/reference/build-scripts.html#rustc-link-lib)
[INFO] [stdout] 
[INFO] [stdout] 
[INFO] [stderr] error: could not compile `guess_the_number` (bin "guess_the_number") due to 1 previous error

Updated targets

All Rust musl targets that bundle a copy of musl now bundle 1.2.5. All Rust musl targets now require musl 1.2.5 at a minimum.

In practice, this mostly impacts the three "Tier 2 With Host Tools" musl targets, which were pinned to musl 1.2.3:

  • aarch64-unknown-linux-musl
  • x86_64-unknown-linux-musl
  • powerpc64le-unknown-linux-musl

The fourth target at this level of support, loongarch64-unknown-linux-musl, is so new that it was always on musl 1.2.5.

Due to an apparent configuration oversight with crosstool-ng, all other targets were already bundling musl 1.2.5. These targets were silently upgraded to musl 1.2.4 in Rust 1.74.0 and silently upgraded to musl 1.2.5 in Rust 1.86. This oversight has been rectified and all targets have been pinned to musl 1.2.5 to prevent future silent upgrades (but hey, no one noticing bodes well for the ecosystem impact of this change). Their documentation has now been updated to reflect the fact that bundling 1.2.5 is actually intentional, and that 1.2.5 is now considered a minimum requirement.

Here are all the updated definitions:

Tier 2 with Host Tools

| target | notes |
|---|---|
| aarch64-unknown-linux-musl | ARM64 Linux with musl 1.2.5 |
| powerpc64le-unknown-linux-musl | PPC64LE Linux (kernel 4.19, musl 1.2.5) |
| x86_64-unknown-linux-musl | 64-bit Linux with musl 1.2.5 |

Tier 2 without Host Tools

| target | std | notes |
|---|---|---|
| arm-unknown-linux-musleabi | ✓ | Armv6 Linux with musl 1.2.5 |
| arm-unknown-linux-musleabihf | ✓ | Armv6 Linux with musl 1.2.5, hardfloat |
| armv5te-unknown-linux-musleabi | ✓ | Armv5TE Linux with musl 1.2.5 |
| armv7-unknown-linux-musleabi | ✓ | Armv7-A Linux with musl 1.2.5 |
| armv7-unknown-linux-musleabihf | ✓ | Armv7-A Linux with musl 1.2.5, hardfloat |
| i586-unknown-linux-musl | ✓ | 32-bit Linux (musl 1.2.5, original Pentium) |
| i686-unknown-linux-musl | ✓ | 32-bit Linux with musl 1.2.5 (Pentium 4) |
| riscv64gc-unknown-linux-musl | ✓ | RISC-V Linux (kernel 4.20+, musl 1.2.5) |

Tier 3

| target | std | host | notes |
|---|---|---|---|
| hexagon-unknown-linux-musl | ✓ | | Hexagon Linux with musl 1.2.5 |
| mips-unknown-linux-musl | ✓ | | MIPS Linux with musl 1.2.5 |
| mips64-openwrt-linux-musl | ? | | MIPS64 for OpenWrt Linux musl 1.2.5 |
| mips64-unknown-linux-muslabi64 | ✓ | | MIPS64 Linux, N64 ABI, musl 1.2.5 |
| mips64el-unknown-linux-muslabi64 | ✓ | | MIPS64 (little endian) Linux, N64 ABI, musl 1.2.5 |
| mipsel-unknown-linux-musl | ✓ | | MIPS (little endian) Linux with musl 1.2.5 |
| powerpc-unknown-linux-musl | ? | | PowerPC Linux with musl 1.2.5 |
| powerpc-unknown-linux-muslspe | ? | | PowerPC SPE Linux with musl 1.2.5 |
| powerpc64-unknown-linux-musl | ✓ | | PPC64 Linux (kernel 4.19, musl 1.2.5) |
| riscv32gc-unknown-linux-musl | ? | | RISC-V Linux (kernel 5.4, musl 1.2.5 + RISCV32 support patches) |
| s390x-unknown-linux-musl | ✓ | | S390x Linux (kernel 3.2, musl 1.2.5) |
| thumbv7neon-unknown-linux-musleabihf | ? | | Thumb2-mode Armv7-A Linux with NEON, musl 1.2.5 |
| x86_64-unikraft-linux-musl | ✓ | | 64-bit Unikraft with musl 1.2.5 |

J.C. JonesReflecting on 10 years of Let’s Encrypt

My friend Christophe Brocas has just published a retrospective on the ten years since we unveiled the ACME protocol to the world. He interviewed me and some colleagues for the piece, and I recommend it! There’s even nice comments on HackerNews, which always makes me smile.

It’s been fun to think back on the early days that made such a dramatic inflection in my career. In early 2014 I was still working on selling turn-key PKI systems based on my SAIFE framework, though the company had been dealt quite a blow by the 2013 U.S. Federal Government shutdown. I had just constructed a certificate authority that would go on to be added to relevant trust lists, and the freshness of that experience became a key part of my recruitment into what became Let’s Encrypt.

Joining Mozilla in Q4 2014 (basically 3 weeks after this blog post), my new manager Richard Barnes introduced me immediately to Josh Aas and the secret “build a free CA” project. It was to be a side project for me, alongside coming up to speed on NSS. But this was a very fun side project: Given 38U in one datacenter and 62U in a second, design a network that exceeds WebTrust requirements, is usable and maintainable by a small team, and build a functional CA out of it in six months.

Naturally, it actually took thirteen months.

But we pulled it off. We aggressively kept everything as simple as we could, with the one bit of deliberate complexity being to structure Boulder, the CA software, as microservices with strong network security partitions.

A considerable amount has been written about what happened then. There’s also a recording of me talking a bit about it shortly after.

But thinking back ten years now, to that day on 3 December 2015 when I, sick in bed and operating dose-to-dose on fever reducers, had the privilege of running the commands that opened the public beta… what a ride.

While I’ve done things since, I can’t imagine anything in my career topping helping to launch Let’s Encrypt.

The Rust Programming Language BlogLessons learned from the Rust Vision Doc process

Starting earlier this year, a group of us set out on a crazy quest: to author a "Rust vision doc". As we described it in the original project goal proposal:

The Rust Vision Doc will summarize the state of Rust adoption -- where is Rust adding value? what works well? what doesn't? -- based on conversations with individual Rust users from different communities, major Rust projects, and companies large and small that are adopting Rust.

Over the course of this year, the Vision Doc group has gathered up a lot of data. We began with a broad-based survey that got about 4200 responses. After that, we conducted over 70 interviews, each one about 45 minutes, with as broad a set of Rust users as we could find1.

This is the first of a series of blog posts covering what we learned throughout that process and what recommendations we have to offer as a result. This first post is going to go broad. We'll discuss the process we used and where we think it could be improved going forward. We'll talk about some of the big themes we heard -- some that were surprising and others that were, well, not surprising at all. Finally, we'll close with some recommendations for how the project might do more work like this in the future.

The questions we were trying to answer

One of the first things we did in starting out with the vision doc was to meet with a User Research expert, Holly Ellis, who gave us a quick tutorial on how User Research works2. Working with her, we laid out a set of research questions that we wanted to answer. Our first cut was very broad, covering three themes:

  • Rust the technology:
    • "How does Rust fit into the overall language landscape? What is Rust's mission?"
    • "What brings people to Rust and why do they choose to use it for a particular problem...?"
    • "What would help Rust to succeed in these domains...?" (e.g., network systems, embedded)
    • "How can we scale Rust to industry-wide adoption? And how can we ensure that, as we do so, we continue to have a happy, joyful open-source community?"
  • Rust the global project:
    • "How can we improve the experience of using Rust for people across the globe?"
    • "How can we improve the experience of contributing to and maintaining Rust for people across the globe?"
  • Rust the open-source project:
    • "How can we tap into the knowledge, experience, and enthusiasm of a growing Rust userbase to improve Rust?"
    • "How can we ensure that individual or volunteer Rust maintainers are well-supported?"
    • "What is the right model for Foundation-project interaction?"

Step 1: Broad-based survey

Before embarking on individual interviews, we wanted to get a broad snapshot of Rust usage. We also wanted to find a base of people that we could talk to. We created a survey that asked a few short "demographic" questions -- e.g., where does the respondent live, what domains do they work on, how would they rate their experience -- and some open-ended questions about their journey to Rust, what kind of projects they feel are a good fit for Rust, what they found challenging when learning, etc. It also asked for (optional) contact information.

We got a LOT of responses -- over 4200! Analyzing this much data is not easy, and we were very grateful to Kapiche, who offered us free use of their tool to work through the data. ❤

The survey is useful in two ways. First, it's an interesting data-set in its own right, although you have to be aware of selection bias. Second, the survey also gave us something that we can use to cross-validate some of what we heard in 1:1 interviews and to look for themes we might otherwise have missed. And of course it gave us additional names of people we can talk to (though most respondents didn't leave contact information).

Step 2: Interviewing individuals

The next step after the survey was to get out there and talk to people. We sourced people from a lot of places: the survey and personal contacts, of course, but we also sat down with people at conferences and went to meetups. We even went to a Python meetup in an effort to find people who were a bit outside the usual "Rust circle".

When interviewing people, the basic insight of User Experience research is that you don't necessarily ask people the exact questions you want to answer. That is likely to get them speculating and giving you the answer that they think they "ought" to say. Instead, you come at it sideways. You ask them factual, non-leading questions. In other words, you certainly don't say, "Do you agree the borrow checker is really hard?" And you probably don't even say, "What is the biggest pain point you had with Rust?" Instead, you might say, "What was the last time you felt confused by an error message?" And then go from there, "Is this a typical example? If not, what's another case where you felt confused?"

To be honest, these sorts of "extremely non-leading questions" are kind of difficult to do. But they can uncover some surprising results.

We got answers -- but not all the answers we wanted

4200 survey responses and 70 interviews later, we got a lot of information -- but we still don't feel like we have the answers to some of the biggest questions. Given the kinds of questions we asked, we got a pretty good view on the kinds of things people love about Rust and what it offers relative to other languages. We got a sense for the broad areas that people find challenging. We also learned a few things about how the Rust project interacts with others and how things vary across the globe.

What we really don't have is enough data to say "if you do X, Y, and Z, that will really unblock Rust adoption in this domain". We just didn't get into enough technical detail, for example, to give guidance on which features ought to be prioritized, or to help answer specific design questions that the lang or libs team may consider.

One big lesson: there are only 24 hours in a day

One of the things we learned was that you need to stay focused. There were so many questions we wanted to ask, but only so much time in which to do so. Ultimately, we wound up narrowing our scope in several ways:

  • we focused primarily on the individual developer experience, and only had minimal discussion with companies as a whole;
  • we dove fairly deep into one area (the Safety Critical domain) but didn't go as deep into the details of other domains;
  • we focused primarily on Rust adoption, and in particular did not even attempt to answer the questions about "Rust the open-source project".

Another big lesson: haters gonna... stay quiet?

One thing we found surprisingly difficult was finding people to interview who didn't like Rust. 49% of survey respondents, for example, rated their Rust comfort as 4 or 5 out of 5, and only 18.5% said 1 or 2. And of those, only a handful gave contact information.

It turns out that people who think Rust isn't worth using mostly don't read the Rust blog or want to talk about that with a bunch of Rust fanatics.3 This is a shame, of course, as likely those folks have a lot to teach us about the boundaries of where Rust adds value. We are currently doing some targeted outreach in an attempt to grow our scope here, so stay tuned, we may get more data.

One fun fact: enums are Rust's underappreciated superpower

We will do a deeper dive into the things people say they like about Rust later (hint: performance and reliability both make the cut). One interesting thing we found was the number of people who talked specifically about Rust enums, which allow you to package up the state of your program along with the data it has available in that state. Enums are a concept that Rust adapted from functional languages like OCaml and Haskell and fitted into the systems programming setting.

"The usage of Enum is a new concept for me. And I like this concept. It's not a class and it's not just a boolean, limited to false or true. It has different states." -- New Rust developer

"Tagged unions. I don't think I've seriously used another production language which has that. Whenever I go back to a different language I really miss that as a way of accurately modeling the domain." -- Embedded developer

Where do we go from here? Create a user research team

When we set out to write the vision doc, we imagined that it would take the form of an RFC. We imagined that RFC identifying key focus areas for Rust and making other kinds of recommendations. Now that we've been through it, we don't think we have the data we need to write that kind of RFC (and we're also not sure if that's the right kind of RFC to write). But we did learn a lot and we are convinced of the importance of this kind of work.

Therefore, our plan is to do the following. First, we're going to write-up a series of blog posts diving into what we learned about our research questions along with other kinds of questions that we encountered as we went.

Second, we plan to author an RFC proposing a dedicated user research team for the Rust org. The role of this team would be to gather data of all forms (interviews, surveys, etc) and make it available to the Rust project. And whenever they can, they would help to connect Rust customers directly with people extending and improving Rust.

The vision doc process was in many ways our first foray into this kind of research, and it taught us a few things:

  • First, we have to go broad and deep. For this first round, we focused on high-level questions about people's experiences with Rust, and we didn't get deep into technical blockers. This gives us a good overview but limits the depth of recommendations we can make.
  • Second, to answer specific questions we need to do specific research. One of our hypotheses was that we could use UX interviews to help decide thorny questions that come up in RFCs -- e.g., the notorious debate between await x and x.await from yesteryear. What we learned is "sort of". The broad interviews we did did give us information about what kinds of things are important to people (e.g., convenience vs reliability, and so forth), and we'll cover some of that in upcoming write-ups. But to shed light on specific questions (e.g., "will x.await be confused for a field access") will really require more specific research. This may be interviews but it could also be other kinds of tests. These are all things though that a user research team could help with.
  • Third, we should find ways to "open the data" and publish results incrementally. We conducted all of our interviews with a strong guarantee of privacy and we expect to delete the information we've gathered once this project wraps up. Our goal was to ensure people could talk in an unfiltered way. This should always be an option we offer people -- but that level of privacy has a cost, which is that we are not able to share the raw data, even widely across the Rust teams, and (worse) people have to wait for us to do analysis before they can learn anything. This won't work for a long-running team. At the same time, even for seemingly innocuous conversations, posting full transcripts of conversations openly on the internet may not be the best option, so we need to find a sensible compromise.
  1. "As wide a variety of Rust users as we could find" -- the last part is important. One of the weaknesses of this work is that we wanted to hear from more Rust skeptics than we did.

  2. Thanks Holly! We are ever in your debt.

  3. Shocking, I know. But, actually, it is a little -- most programmers love telling you how much they hate everything you do, in my experience?

The Rust Programming Language Blogcrates.io: Malicious crates evm-units and uniswap-utils

Summary

On December 2nd, the crates.io team was notified by Olivia Brown from the Socket Threat Research Team of two malicious crates which were downloading a payload that was likely attempting to steal cryptocurrency.

These crates were:

  • evm-units - 13 versions published in April 2025, downloaded 7257 times
  • uniswap-utils - 14 versions published in April 2025, downloaded 7441 times, used evm-units as a dependency

Actions taken

The user in question, ablerust, was immediately disabled, and the crates in question were deleted from crates.io shortly after. We have retained the malicious crate files for further analysis.

The deletions were performed at 22:01 UTC on December 2nd.

Analysis

Socket has published their analysis in a blog post.

These crates had no dependent downstream crates on crates.io.

Thanks

Our thanks to Olivia Brown from the Socket Threat Research Team for reporting the crates. We also want to thank Carol Nichols from the crates.io team and Walter Pearce and Adam Harvey from the Rust Foundation for aiding in the response.

The Mozilla BlogFast Company names Firefox as a ‘Brand That Matters’

Fast Company has named Firefox to its 2025 “Brands That Matter” list, recognizing companies that go beyond acquiring customers to build meaningful relevance and cultural impact. For us, that honor reflects a simple truth about Mozilla’s mission: We build Firefox to give people agency and choice every time they go online.

In 2025 this brand promise showed up clearly in the features we shipped. One standout was Shake to Summarize, our new AI-powered feature that helps you cut through information overload. With a quick shake or tap, Firefox creates a clean summary of a webpage so you can navigate information with more ease. TIME Magazine gave Shake to Summarize a special mention in its Best Inventions of 2025 list, highlighting how it turns the browser into a helpful assistant instead of a passive window.

Security and privacy remained a constant focus too. Firefox continued to patch high-severity vulnerabilities quickly while reinforcing protections that limit tracking and keep more of your data in your hands.

This year also reminded us that Firefox is much more than a browser. It is a global community. Volunteers, contributors, and supporters helped shape everything from accessibility improvements to support forums to the evolving tab grouping experience. Their work shows up in the small details that make browsing calmer and more intuitive. When people choose Firefox, they join a network of individuals who want the internet to feel more open and more human.

Being named a Brand That Matters is a milestone, but it’s also an ongoing commitment to delivering on our brand promise. As you head into a new year and think about how you want your digital life to feel, you can pick a browser that reflects your values. Choose Firefox, and choose an internet built around your agency and your choices.

Take Firefox with you

Download Firefox Mobile

The post Fast Company names Firefox as a ‘Brand That Matters’ appeared first on The Mozilla Blog.

The Mozilla BlogData War goes digital: Firefox’s card game is now online

Last month, Firefox turned 21, marking more than two decades of building a web that reflects creativity, independence and trust. At TwitchCon, we celebrated by launching billionaires into space and debuting a new card game, Data War.

Billionaire Blast Off started with a simple premise: send billionaires into space and have fun doing it. With Data War, we created a fun and often chaotic game where you compete to win a one-way ticket to space for a data-hungry billionaire. We were thrilled that so many people at TwitchCon had a blast playing it.

You can download your own physical deck of Data War here, and now the chaos comes to your browser: the digital version of Data War is free to play online.

Cartoon billionaires float in space on toy rockets, yachts, and boxes with text: “Billionaire Blast Off — Powered by Firefox.”

Jump in, stack your deck and blast off with our Data War digital game

Play now

From convention floor to your screen

During TwitchCon, visitors packed tables to duel it out, shouting “Data Grab!” and swapping decks mid-game as billionaires blasted off into orbit. The new online version brings that same energy to everyone.

TwitchCon attendees playing Data War

“If you laugh at something, you have power over it,” said Dave Rowley, Executive Creative Director at Mondo Robot, Firefox’s partner behind Data War. “We took the approach of applying absurdity as a cathartic device wherever we could. That allowed us to balance the realities of billionaires profiting off your data with a sort of reductive sarcasm, creating an outlet for frustration that lets you reclaim some control through a genuinely fun and accessible play experience.” 

How it started and evolved

Rowley worked with the Firefox team to design Data War to be instantly learnable and endlessly chaotic. Think War meets Exploding Kittens: data is currency, billionaires are unpredictable, and Firefox shows up to remind players that they are the ones really in control.

To bring Firefox’s perspective into the game’s creation, the team invited people who actually build Firefox to get involved. One of them was Philip Luk, Firefox’s Director of Engineering, who playtested early versions of the physical game with his teenage kids. Their feedback helped shape Data War into something more dynamic than classic War.

“The game aims to spotlight how big tech companies and their billionaire owners profit from our data,” said Philip Luk, Firefox Director of Engineering. “My kids and I contributed ideas for new cards that add different strategic twists, making it more than just flipping cards – it’s about reacting, laughing, and watching the chaos unfold.”

“Playtesting with my teens helped us see where we could make it more unpredictable and fun,” Luk added. “Those moments of surprise are what made the game engaging.”

After TwitchCon, the team set out to create a digital version of Data War: a lighter, browser-based game that keeps the spirit of chaos and humor but can be played in quick bursts anytime.

“We wanted to design a digital version of Data War that anyone could play whenever they needed a quick break,” said Benton Persons, Marketing Partnerships and Activations Lead at Mozilla. “That’s what Firefox is all about, taking the stress out of being online because you’re in control of your experience. And really, who doesn’t want to launch little billionaires into space between meetings?”

Built for fun, powered by values

Every match is a reminder of the absurd things billionaires and Big Tech do to profit from your personal data. 

But it’s also a reminder that players are the ones in control and ultimately launch those billionaires into space.

“Our goal was simple: make Data War just as chaotic and fun in your browser as it is on a table. So we streamlined the rules and added digital-only moments like animations, fast turns and story hits, like the Data Grab minigame and Billionaire Blast Off win sequences, that keep every round feeling fresh, even when you’re playing solo,” said Rowley. “When the table erupts in laughter, that’s when you know you’ve won.”

Play now

You can play Data War Digital right now at: https://billionaireblastoff.firefox.com/datawar

Take it offline and download your own version of the physical deck to play with friends, because launching billionaires into space is even better together.

The post Data War goes digital: Firefox’s card game is now online appeared first on The Mozilla Blog.

Mozilla ThunderbirdState of the Thunder 14: The 2026 Mobile Roadmap

Welcome back to the latest State of the Thunder. In the last Community Office Hours, Heather and Monica sat down with members of the mobile team in a retrospective to celebrate the first year of the Thunderbird for Android app. In this recording, however, Alessandro is leading viewers through the upcoming mobile roadmap, both for Android and iOS.

Looking ahead for Android

Key Priorities

Next year’s top priority is rearchitecture and core maintenance. The underlying code behind the Thunderbird for Android app, which was built on top of K-9 Mail, is 15 years old. That’s ancient in software terms. This work will make the app more stable and reduce the odds of a change breaking it. It’s a broad initiative with many elements, including bringing consistency across the apps and their UI. For several reasons, we won’t be continuing with Material UI; instead, we’ll be using our own homegrown Bolt UI.

Another feature the mobile team would like to prioritize is continuity with Thunderbird Pro. Since the exact delivery dates for these services have not yet been defined, setting priorities is difficult, but the team has confirmed the Thundermail integration will come first. Integrating Send, our end-to-end encrypted file-sharing service, will be trickier on mobile. However, this work may ultimately enable encrypted sync of user account settings as a future feature.

The team also plans to modernize the Message List and Message View, in addition to ensuring they work well. As users probably spend most of their time on these screens, this is key to get right. We want an experience that compares favorably with other mobile mail apps.

Additional Goals

Several features and feature explorations fill out the rest of the Android roadmap. These include HTML signatures that can be synced from the desktop app. The team will also explore providing JMAP support, Exchange support, and calendar support. It’s been a while since the Android app added a new protocol, but Thundermail includes support for JMAP, and the desktop app monthly release now includes Exchange support. It’s important for users to have a similar experience across the apps. For calendar explorations, we’ll determine whether it’s better to integrate with the native Android calendar or build a calendar section into the app.

Prioritization

Our urgent priorities for next year are the Rearchitecture and Core Maintenance, and the Message View improvements. If we complete both of these goals with our growing yet still small team, we’ll consider that a realistic success!

Our plans for iOS

The mobile team also includes our iOS developers, and we have some broad goals for iOS development next year. iOS is, Alessandro notes, a locked-in, opinionated platform, and we want to make future iOS app users comfortable using Thunderbird on their chosen platform. Any iOS roadmap also needs to balance developer and community satisfaction. Prioritizing IMAP as the first supported protocol reflects this, as most users still rely on it. Once that’s complete, we can begin work on JMAP to help lead the way for other clients to adopt it. This is the same principle behind adding Exchange support to our apps. While it may be a proprietary protocol, adding it opens up Thunderbird for many people who want to use it but currently can’t.

Watch the Video (also on PeerTube)

Listen to the Podcast

The post State of the Thunder 14: The 2026 Mobile Roadmap appeared first on The Thunderbird Blog.

The Mozilla BlogCelebrating the contributors that power Mozilla Support

Every day, Firefox users around the world turn to Mozilla Support (SUMO) with a question, a hiccup or just a little curiosity. It’s community-powered – contributors offer answers and support to make someone’s day a little easier.

We celebrated this global community last month with Ask-A-Fox, a weeklong virtual event that brought together longtime contributors, newcomers and Mozilla staffers. The idea was simple: connect across time zones, trade tips and, yes, answer questions.

Contributor appreciation, AMAs and an emoji hunt

For one lively week, contributors across Firefox and Thunderbird rallied together. Reply rates soared, response times dropped, and the forums buzzed with renewed energy. But the real story was the sense of connection.

There were live Ask Me Anything sessions with Mozilla’s WebCompat, Web Performance, and Thunderbird teams. There was even a playful 🦊 + ⚡ emoji hunt through our Knowledge Base.

“That AMA was really interesting,” said longtime Firefox contributor Paul. “I learned a lot and I recommend those that could not watch it live catch the recording as I am sure it will be very useful in helping users in SUMO.”

Ask-A-Fox was a celebration of people: long-time contributors, brand-new faces and everyone in between. Here are just a few standout contributors:

  • Firefox Desktop (including Enterprise)
    Paul, Denyshon, Jonz4SUSE, @next, jscher2000
  • Firefox for Android
    Paul, TyDraniu, GerardoPcp04, Mad_Maks, sjohnn
  • Firefox for iOS
    Paul, Simon.c.lord, TyDraniu, Mad_Maks, Mozilla-assistent
  • Thunderbird (including Android)
    Davidsk, Sfhowes, Mozilla98, MattAuSupport, Christ1

Newcomers mozilla98, starretreat, sjohnn, Vexi, Mark, Mapenzi, cartdaniel437, hariiee1277, and thisisharsh7 also made a big impact.

New contributor Shirmaya John said, “I love helping people, and I’m passionate about computers, so assisting with bugs or other tech issues really makes my day. I’m excited to grow here!” 

Contributor Vincent won our Staff Award for the highest number of replies during the week.

“Ask a Fox highlights the incredible collaborative spirit of our community. A reminder of what we can achieve when we unite around a shared goal,” said Kiki Kelimutu, a senior community manager at Mozilla.

Firefox has been powered by community from the start

As Mozilla’s communities program manager, I’ve seen firsthand how genuine connection fuels everything we do. Members of our community aren’t just answering questions; they’re building relationships, learning together, and showing up for one another with authenticity and care.

Mozilla is built by people who believe the internet should be open and accessible to all, and our community is the heartbeat of that vision. What started back in 2007 (and found its online home in 2010 at support.mozilla.org) has grown into a global network of contributors helping millions of Firefox users find answers, share solutions and get back on their Firefox journey.

Every question answered not only helps a user, it helps us build a better Firefox. By surfacing real issues and feedback, our community shapes the course of our products and keeps the web stronger for everyone.

Join the next Ask-A-Fox

Ask-A-Fox is a celebration of what makes Mozilla unique: our people.

As someone who’s spent years building communities, I know that lasting engagement doesn’t come from numbers or dashboards. It comes from treating contributors as individuals — people who bring their own stories, skills, and care to the table.

When Mozillians come together to share knowledge, laughter or even a few emojis, the result is more than faster replies. It’s a connection.

Two more Ask-A-Fox events are already planned for next year, continuing the work of building communities that make the web more open and welcoming.

If you’ve ever wanted to make the web a little more human, come join us. Because every answer, every conversation, and every connection helps keep Firefox thriving.

A cheerful cartoon fox head with a speech bubble containing three dots, surrounded by multiple chat bubbles on a warm orange-to-yellow gradient background. The fox appears to be communicating, evoking a friendly and conversational tone.

Join us in shaping the web

Sign up here

The post Celebrating the contributors that power Mozilla Support appeared first on The Mozilla Blog.

The Rust Programming Language BlogInterview with Jan David Nose

On the Content Team, we had our first whirlwind outing at RustConf 2025 in Seattle, Washington, USA. There we had a chance to speak with folks about interesting things happening in the Project and the wider community.

Jan David Nose, Infrastructure Team

In this interview, Xander Cesari sits down with Jan David Nose, then one of the full-time engineers on the Infrastructure Team, which maintains and develops the infrastructure upon which Rust is developed and deployed -- including CI/CD tooling and crates.io.

We released this video some weeks ago on an accelerated timeline, in light of the recent software supply chain attacks; the interview itself was conducted before the news of compromised packages in other languages and ecosystems.

Check out the interview here or click below.


Transcript

Xander Cesari: Hey, this is Xander Cesari with the Rust Project Content Team, recording on the last hour of the last day of RustConf 2025 here in Seattle. So it's been a long and amazing two days. And I'm sitting down here with a team member from the Rust Project Infra Team, the unsung heroes of the Rust language. Want to introduce yourself and kind of how you got involved?

Jan David Nose: Yeah, sure. I'm JD. Jan David is the full name, but especially in international contexts, I just go with JD. I've been working for the Rust Foundation for the past three years as a full-time employee and I essentially hit the jackpot to work full-time on open source and I've been in the Infra Team of the Rust Project for the whole time. For the past two years I've led the team together with Jake. So the Infra Team is kind of a thing that lets Rust happen and there's a lot of different pieces.

Xander Cesari: Could you give me an overview of the responsibility of the Infra Team?

Jan David Nose: Sure. I think on a high level, we think about this in terms of, we serve two different groups of people. On one side, we have users of the language, and on the other side, we really try to provide good tooling for the maintainers of the language.

Jan David Nose: Starting with the maintainer side, this is really everything about how Rust is built. From the moment someone makes a contribution or opens a PR, we maintain the continuous integration that makes sure that the PR actually works. There's a lot of bots and tooling helping out behind the scenes to kind of maintain a good status quo, a sane state. Lots of small things like triage tools on GitHub to set labels and ping people and these kinds of things. And that's kind of managed by the Infra Team at large.

Jan David Nose: And then on the user side, the two most important things are making sure users can actually download Rust and making sure crates get delivered. We don't develop crates.io, but we support the infrastructure to actually ship crates to users. All the downloads go through content delivery networks that we provide. The same goes for Rust releases. So if I don't do my job well, which has happened, there might be a global outage of crates.io and no one can download stuff. Those are the two buckets of services that we run and operate.

Xander Cesari: Gotcha. So on the maintainer side, the Rust organization on GitHub is a large organization with a lot of activity, a lot of code. There's obviously a lot of large code bases being developed on GitHub, but there are not that many languages the size of Rust being developed on GitHub. Are there unique challenges to developing a language and the tooling that's required versus developing other software projects?

Jan David Nose: I can think of a few things that have less to do with the language specifically and more with some of the architecture decisions that were made very early on in the life cycle of Rust. One of the things that caused a lot of headaches, mostly for GitHub, and then, when they complained to us, for us as well, is that for a long, long time the index for crates.io was a Git repo on GitHub. As Rust started to grow, the activity on the repo became so big that it caused some issues, I would say in a friendly way, on GitHub, just in terms of how many resources that single repository was consuming. That kicked off the work on a web-based, HTTP-based index to shift that load away. That's certainly one area where we've seen how Rust has struggled a little bit with the platform, but also how the platform provider struggled with us.
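
For context, the HTTP-based replacement mentioned here is the "sparse" index protocol, which Cargo uses by default since Rust 1.70. On older toolchains it can be enabled explicitly; a minimal sketch of the setting in .cargo/config.toml:

    [registries.crates-io]
    protocol = "sparse"  # fetch index entries over HTTP instead of cloning the Git repo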

Jan David Nose: I think for Rust itself, especially when we look at CI, we really want to make sure that Rust works well on all of the targets and all the platforms we support. That means we have an extremely wide CI pipeline where, for every Tier 1 target, we want to run all the tests, we want to build the release artifacts, we want to upload all of that to S3. We want to do as much as we reasonably can for Tier 2 targets and, to a lesser extent, maybe even test some stuff on Tier 3. That has turned into a gigantic build pipeline. Marco gave a talk today on what we've done with CI over the last year. One of the numbers that came out of doing the research for this talk is that we accumulate over three million build minutes per month, which is about six years of CPU time every month.

Jan David Nose: Especially when it comes to open source projects, I think we're one of the biggest consumers of GitHub Actions in that sense. Not the biggest in total; there are definitely bigger commercial projects. But that's a unique challenge for us to manage because we want to provide as good a service as we can to the community and make sure that what we ship is high quality. That comes at a huge cost in terms of scaling. As Rust gets more popular and we want to target more and more platforms, this is like a problem that just continues to grow.

Jan David Nose: We'll probably never remove a lot of targets, so there's an interesting challenge to think about. If it's already big now, how does this look in 5 years, 10 years, 15 years, and how can we make sure we can maintain the level of quality we want to ship? When you build and run for a target in the CI pipeline, some of those Tier 1 targets you can just ask a cloud service provider to give you a VM running on that piece of hardware, but some of them are probably not things that you can just run in the cloud.

Xander Cesari: Is there some HIL (Hardware-In-the-Loop) lab somewhere?

Jan David Nose: So you're touching on a conversation that's happening pretty much as we speak. So far, as part of our target tier policy, there is a clause that says it needs to be able to run in CI. That has meant being very selective about only promoting things to Tier 1 that we can actually run and test. For all of this, we had a prerequisite that it runs on GitHub Actions. So far we've used very little hardware that is not natively supported or provided by GitHub.

Jan David Nose: But this is exactly the point with Rust increasing in popularity. We just got requests to support IBM platforms and RISC-V, and those are not natively supported on GitHub. That has kicked off an internal conversation about how we even support this. How can we as a project enable companies that can provide us hardware to test on? What are the implications of that?

Jan David Nose: On one side, there are interesting constraints and considerations. For example, you don't want your PRs to randomly fail because someone else's hardware is not available. We're already so resource-constrained on how many PRs we can merge each day that adding noise to that process would really slow down contributions to Rust. On the other side, there are security implications. Especially if we talk about promoting something to Tier 1 and we want to build release artifacts on that hardware, we need to make sure that those are actually secure and no one sneaks a back door into the Rust compiler target for RISC-V.

Jan David Nose: So there are interesting challenges for us, especially in the world we live in where supply chain security is a massive concern. We need to figure out how we can both support the growth of Rust and the growth of the language, the community, and the ecosystem at large while also making sure that the things we ship are reliable, secure, and performant. That is becoming an increasingly relevant and interesting piece to work on. So far we've gotten away with the platforms that GitHub supports, but it's really cool to see that this is starting to change and people approach us and are willing to provide hardware, provide sponsorship, and help us test on their platforms. But essentially we don't have a good answer for this yet. We're still trying to figure out what this means, what we need to take into consideration, and what our requirements are to use external hardware.

Xander Cesari: Yeah, everyone is so excited that Rust will run everywhere, but there's a maintenance cost there that is almost exponential in scope.

Jan David Nose: It's really interesting as well because there's a tension there. I think with IBM, for example, approaching us, it's an interesting example. Who has IBM platforms at home? The number of users for that platform is really small globally, but IBM also invests heavily in Rust, tries to make this happen, and is willing to provide the hardware.

Jan David Nose: For us, that leads to a set of questions. Is there a line? Is there a certain requirement? Is there a certain amount of usage that a platform would need for us to promote it? Or do we say we want to promote as much as we can to Tier 1? This is a conversation we haven't really had to have yet. It's only now starting to creep in as Rust is adopted more widely and companies pour serious money and resources into it. That's exciting to see.

Jan David Nose: In this specific case, companies approach the Infra Team to figure out how we can add their platforms to CI as a first step towards Tier 1 support. But it's also a broader discussion we need to have with larger parts of the Rust Project. For Tier 1 promotions, for example, the Compiler Team needs to sign off, Infra needs to sign off. Many more people need to be involved in this discussion of how we can support the growing needs of the ecosystem at large.

Xander Cesari: I get the feeling that's going to be a theme throughout this interview.

Jan David Nose: 100%.

Xander Cesari: So one other tool that's part of this pipeline that I totally didn't know about for a long time, and I think a talk at a different conference clued me into it, is Crater. It's a tool that attempts to run all of the Rust code it can find on the internet. Can you talk about what that tool does and how it integrates into the release process?

Jan David Nose: Whenever someone creates a pull request on GitHub to add a new feature or bug fix to the Rust compiler, they can start what's called a Crater run, or an experiment. Crater is effectively a large fleet of machines that tries to pull in as many crates as it can. Ideally, we would love to test all crates, but for a variety of reasons that's not possible. Some crates simply don't build reliably, so we maintain lists to exclude those. Off the top of my head, I think we currently test against roughly 60% of crates.

Jan David Nose: The experiment takes the code from your pull request, builds the Rust compiler with it, and then uses that compiler to build all of these crates. It reports back whether there are any regressions related to the change you proposed. That is a very important tool for us to maintain backwards compatibility with new versions and new features in Rust. It lets us ask: does the ecosystem still compile if we add this feature to the compiler, and where do we run into issues? Then, and this is more on the Compiler Team side, there's a decision about how to proceed. Is the breakage acceptable? Do we need to adjust the feature? Having Crater is what makes that conversation possible because it gives us real data on the impact on the wider ecosystem.

Xander Cesari: I think that's so interesting because as more and more companies adopt Rust, they're asking whether the language is going to be stable and backward compatible. You hear about other programming languages that had a big version change that caused a lot of drama and code changes. The fact that if you have code on crates.io, the Compiler Team is probably already testing against it for backwards compatibility is pretty reassuring.

Jan David Nose: Yeah, the chances are high, I would say. Especially looking at the whole Python 2 to Python 3 migration, I think as an industry we've learned a lot from those big version jumps. I can't really speak for the Compiler Team because I'm not a member and I wasn't involved in the decision-making, but I feel this is one of the reasons why backwards compatibility is such a big deal in Rust's design. We want to make it as painless as possible to stay current, stay up to date, and make sure we don't accidentally break the language or create painful migration points where the entire ecosystem has to move at once.

Xander Cesari: Do you know if there are other organizations pulling in something like Crater and running it on their own internal crate repositories, maybe some of the big tech companies or other compiler developers or even other languages? Or is this really bespoke for the Rust compiler team?

Jan David Nose: I don't know of anyone who runs Crater itself as a tool. Crater is built on a sandboxing framework that we also use in other places. For example, docs.rs uses some of the same underlying infrastructure to build all of the documentation. We try to share as much as we can of the functionality that exists in Crater, but I'm not aware of anyone using Crater in the same way we do.

Xander Cesari: Gotcha. The other big part of your job is that the Infra Team works on supporting maintainers, but it also supports users and consumers of Rust who are pulling from crates.io. It sounds like crates.io is not directly within your team, but you support a lot of the backend there.

Jan David Nose: Yeah, exactly. crates.io has its own team, and that team maintains the web application and the APIs. The crates themselves, all the individual files that people download, are hosted within our infrastructure. The Infra Team maintains the content delivery network that sits in front of that. Every download of a crate goes through infrastructure that we maintain. We collaborate very closely with the crates.io team on this shared interface. They own the app and the API, and we make sure that the files get delivered to the end user.

Xander Cesari: So it sounds like there's a lot of verification of the files that get uploaded and checks every time someone pushes a new version to crates.io. That part all happens within crates.io as an application.

Jan David Nose: Cargo uses the crates.io API to upload the crate file. crates.io has a lot of internal logic to verify that it is valid and that everything looks correct. For us, as the Infra Team, we treat that as a black box. crates.io does its work, and if it is happy with the upload, it stores the file in S3. From that point onward, infrastructure makes sure that the file is accessible and can be downloaded so people can start using your crate.
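
For crate authors, that packaging-and-verification step can be previewed locally before anything is uploaded, using a standard Cargo flag:

    cargo publish --dry-run  # package and verify the crate without uploading it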

Xander Cesari: In this theme of Rust being a bit of a victim of its own success, I assume all of the traffic graphs and download graphs are very much up and to the right.

Jan David Nose: On the Foundation side, one of our colleagues likes to track how long it takes crates.io to serve another billion downloads, and that interval has been shrinking quickly. I don't remember what it was three years ago, but it has come down by orders of magnitude. In our download traffic we definitely see exponential growth. Our traffic tends to double year over year, and that trend has been pretty stable. It really seems like Rust is getting a lot of adoption in the ecosystem and people are using it for more and more things.

Xander Cesari: How has the Infra Team scaled with that? Are you staying ahead of it, or are there a lot of late nights?

Jan David Nose: There have definitely been late nights. In the three years I've been working in the Infra Team, every year has had a different theme that was essentially a fire to put out.

Jan David Nose: It changes because we fix one thing and then the next thing breaks. So far, luckily, those fires have been mostly sequential, not parallel. When I joined, bandwidth was the big topic. Over the last year, it has been more about CI. About three years ago, we hit this inflection point where traffic was doubling and the sponsorship capacity we had at the time was reaching its limits.

Jan David Nose: Two or three years ago, Fastly welcomed us into their Fast Forward program and has been sponsoring all of our bandwidth since then. That has mostly helped me sleep at night. It has been a very good relationship. They have been an amazing partner and have helped us at every step to remove the fear that we might hit limits. They are very active in the open source community at large; most famously they also sponsor PyPI and the Python ecosystem, compared to which we're a tiny fish in a very big pond. That gives us a lot of confidence that we can sustain this growth and keep providing crates and releases at the level of quality people expect.

Xander Cesari: In some ways, Rust did such a good job of making all of that infrastructure feel invisible. You just type Cargo commands into your terminal and it feels magical.

Jan David Nose: I'm really happy about that. It's an interesting aspect of running an infrastructure team in open source. If you look at the ten-year history since the first stable release, or even the fifteen years since Rust really started, infrastructure was volunteer-run for most of that time. I've been here for three years, and I was the first full-time infrastructure engineer. So for ten to twelve years, volunteers ran the infrastructure.

Jan David Nose: For them, it was crucial that things just worked, because you can't page volunteers in the middle of the night because a server caught fire or downloads stopped working. From the beginning, our infrastructure has been designed to be as simple and as reliable as possible. The same is true for our CDNs. I always feel a bit bad because Fastly is an amazing sponsor. Every time we meet them at conferences or they announce new features, they ask whether we want to use them or talk about how we use Fastly in production. And every time I have to say: we have the simplest configuration possible. We set some HTTP headers. That's pretty much it.

Jan David Nose: It's a very cool platform, but we use the smallest set of features because we need to maintain all of this with a very small team that is mostly volunteer-based. Our priority has always been to keep things simple and reliable and not chase every fancy new technology, so that the project stays sustainable.

Xander Cesari: Volunteer-based organizations seem to have to care about work-life balance, which is probably terrific, and there are lessons to be learned there.

Jan David Nose: Yeah, it's definitely a very interesting environment to work in. It has different rules than corporations or commercial teams. We have to think about how much work we can do in a given timeframe in a very different way, because it's unpredictable when volunteers have time, when they're around, and what is happening in their lives.

Jan David Nose: Over the last few years, we've tried to reduce the number of fires that can break out. And when they do happen, we try to shield volunteers from them and take that work on as full-time employees. That started with me three years ago. Last year Marco joined, which increased the capacity we have, because there is so much to do on the Infra side that even with me working full-time, we simply did not have enough people.

Xander Cesari: So you're two full-time and everything else is volunteer.

Jan David Nose: Exactly. The team is around eight people. Marco and I work full-time and are paid by the Rust Foundation to focus exclusively on infrastructure. Then we have a handful of volunteers who work on different things.

Jan David Nose: Because our field of responsibility is so wide, the Infra Team works more in silos than other teams might. We have people who care deeply about very specific parts of the infrastructure. Otherwise there is simply too much to know for any one person. It has been a really nice mix, and it's amazing to work with the people on the team.

Jan David Nose: As someone who is privileged enough to work full-time on this and has the time and resources, we try to bear the bigger burden and create a space that is fun for volunteers to join. We want them to work on exciting things where there is less risk of something catching fire, where it's easier to come in, do a piece of work, and then step away. If your personal life takes over for two weeks, that's okay, because someone is there to make sure the servers and the lights stay on.

Jan David Nose: A lot of that work lives more on the maintainer side: the GitHub apps, the bots that help with triage. It's less risky if something goes wrong there. On the user side, if you push the wrong DNS setting, as someone might have done, you can end up in a situation where for 30 minutes no one can download crates. And in this case, "no one" literally means no user worldwide. That's not an experience I want volunteers to have. It's extremely stressful and was ultimately one of the reasons I joined in the first place—there was a real feeling of burnout from carrying that responsibility.

Jan David Nose: It's easier to carry that as a full-timer. We have more time and more ways to manage the stress. I'm honestly extremely amazed by what the Infra Team was able to do as volunteers. It's unbelievable what they built and how far they pushed Rust to get to where we are now.

Xander Cesari: I think anyone who's managing web traffic in 2025 is talking about traffic skyrocketing due to bots and scrapers for AI or other purposes. Has that hit the Rust network as well?

Jan David Nose: Yeah, we've definitely seen that. It's handled by a slightly different team, but on the docs.rs side in particular we've seen crawlers hit us hard from time to time, and that has caused noticeable service degradation. We're painfully aware of the increase in traffic that comes in short but very intense bursts when crawlers go wild.

Jan David Nose: That introduces a new challenge for our infrastructure. We need to figure out how to react to that traffic and protect our services from becoming unavailable to real users who want to use docs.rs to look up something for their work. On the CDN side, our providers can usually handle the traffic. It is more often the application side where things hurt.

Jan David Nose: On the CDN side we also see people crawling crates.io, presumably to vacuum up the entire crates ecosystem into an LLM. Fortunately, over the last two years we've done a lot of work to make sure crates.io as an application is less affected by these traffic spikes. Downloads now bypass crates.io entirely and go straight to the CDN, so the API is not hit by these bursts. In the past, this would have looked like a DDoS attack, with so many requests from so many sources that we couldn't handle it.
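
This is visible at the network level: at the time of writing, the download endpoint answers with a redirect to the static CDN rather than serving the bytes itself (an illustration; exact hostnames and headers may differ):

    curl -sI https://crates.io/api/v1/crates/serde/1.0.0/download
    # expect a 302 response with a Location header pointing at static.crates.io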

Jan David Nose: We've done a lot of backend work to keep our stack reliable, but it's definitely something that has changed the game over the last year. We can clearly see that crawlers are much more active than before.

Xander Cesari: That makes sense. I'm sure Fastly is working on this as well. Their business has to adapt to be robust to this new internet.

Jan David Nose: Exactly. For example, one of the conversations we're having right now is about docs.rs. It's still hosted on AWS behind CloudFront, but we're talking about putting it behind Fastly because through Fastly we get features like bot protection that can help keep crawlers out.

Jan David Nose: This is a good example of how our conversations have changed in the last six months. At the start of the year I did not think this would be a topic we would be discussing. We were focused on other things. For docs.rs we have long-term plans to rebuild the infrastructure that powers it, and I expected us to spend our energy there. But with the changes in the industry and everyone trying to accumulate as much data as possible, our priorities have shifted. The problems we face and the order in which we tackle them have changed.

Xander Cesari: And I assume as one of the few paid members of a mostly volunteer team, you often end up working on the fires, not the interesting next feature that might be more fun.

Jan David Nose: That is true, although it sounds a bit negative to say I only get to work on fires. Sometimes it feels like that because, as with any technology stack, there is a lot of maintenance overhead. We definitely pay that price on the infrastructure side.

Jan David Nose: Marco, for example, spent time this year going through all the servers we run, cataloging them, and making sure they're patched and on the latest operating system version. We updated our Ubuntu machines to the latest LTS. It feels a bit like busy work—you just have to do it because it's important and necessary, but it's not the most exciting project.

Jan David Nose: On the other hand, when it comes to things like CDN configuration and figuring out how bot protection features work and whether they are relevant to us, that is also genuinely interesting work. It lets us play with new tools vendors provide, and we're working on challenges that the wider industry is facing. How do you deal with this new kind of traffic? What are the implications of banning bots? How high is the risk of blocking real users? Sometimes someone just misconfigures a curl script, and from the outside it looks like they're crawling our site.

Jan David Nose: So it's an interesting field to work in, figuring out how we can use new features and address new challenges. That keeps it exciting even for us full-timers who do more of the "boring" work. We get to adapt alongside how the world around us is changing. If there's one constant, it's change.

Xander Cesari: Another ripped-from-the-headlines change around this topic is software supply chain security, and specifically xz-utils and the conversation around open source security. How much has that changed the landscape you work in?

Jan David Nose: The xz-utils compromise was scary. I don't want to call it a wake-up call, because we've been aware that supply chain security is a big issue and this was not the first compromise. But the way it happened felt very unsettling. You saw an actor spend a year and a half building social trust in an open source project and then using that to introduce a backdoor.

Jan David Nose: Thinking about that in the context of Rust: every team in the project talks about how we need more maintainers, how there's too much workload on the people who are currently contributing, and how Rust's growth puts strain on the organization as a whole. We want to be an open and welcoming project, and right now we also need to bring new people in. If someone shows up and says, "I'm willing to help, please onboard me," and they stick around for a year and then do something malicious, we would be susceptible to that. I don't think this is unique to Rust. This is an inherent problem in open source.

Xander Cesari: Yeah, it's antithetical to the culture.

Jan David Nose: Exactly. So we're trying to think through how we, as a project and as an ecosystem, deal with persistent threat actors who have the time and resources to play a long game. Paying someone to work full-time on open source for a year is a very different threat model than what we used to worry about.

Jan David Nose: I used to joke that the biggest threat to crates.io was me accidentally pulling the plug on a CDN. I think that has changed. Today the bigger threat is someone managing to insert malicious code into our releases, our supply chain, or crates.io itself. They could find ways to interfere with our systems in ways we're simply not prepared for, where, as a largely volunteer organization, we might be too slow to react to a new kind of attack.

Jan David Nose: Looking back over the last three years, this shift became very noticeable, especially after the first year. Traffic was doubling, Rust usage was going up a lot, and there were news stories about Rust being used in the Windows kernel, in Android, and in parts of iOS. Suddenly Rust is everywhere. If you want to attack "everywhere," going after Rust becomes attractive. That definitely puts a target on our back and has changed the game.

Jan David Nose: I'm very glad the Rust Foundation has a dedicated security engineer who has done a lot of threat modeling and worked with us on infrastructure security. There's also a lot of work happening specifically around the crates ecosystem and preventing supply chain attacks through crates. Luckily, it's not something the Infra side has to solve alone. But it is getting a lot more attention, and I think it will be one of the big challenges for the future: how a mostly volunteer-run project keeps up with this looming threat.

Xander Cesari: And it is the industry at large. This is not a unique problem to the Rust package manager. All package registries, from Python to JavaScript to Nix, deal with this. Is there an industry-wide conversation about how to help each other out and share learnings?

Jan David Nose: Yeah, there's definitely a lot happening. I have to smile a bit because, with a lot of empathy but also a bit of relief, we sometimes share news when another package ecosystem gets compromised. It's a reminder that it's not just us; sometimes it's npm's turn.

Jan David Nose: We really try to stay aware of what's happening in the industry and in other ecosystems: what new threats or attack vectors are emerging, what others are struggling with. Sometimes that is security; sometimes it's usability. A year and a half ago, for example, npm had the "everything" package where someone declared every package on npm as a dependency, which blew up the index. We look at incidents like that and ask whether crates.io would struggle with something similar and whether we need to make changes.

Jan David Nose: On the security side we also follow closely what others are doing. In the packaging community, the different package managers are starting to come together more often to figure out which problems everyone shares. There is a bit of a joke that we're all just shipping files over the internet. Whether it's an npm package or a crate, ultimately it's a bunch of text files in a zip. So from an infrastructure perspective the problems are very similar.

Jan David Nose: These communities are now talking more about what problems PyPI has, what problems crates.io has, what is happening in the npm space. One thing every ecosystem has seen—even the very established ones—is a big increase in bandwidth needs, largely connected to the emergence of AI. PyPI, for example, publishes download charts, and it's striking. Python had steady growth—slightly exponential, but manageable—for many years. Then a year or two ago you see a massive hockey stick. People discovered that PyPI was a great distribution system for their models. There were no file size limits at the time, so you could publish precompiled GPU models there.

Jan David Nose: That pattern shows up everywhere. It has kicked off a new era for packaging ecosystems to come together and ask: in a time where open source is underfunded and traffic needs keep growing, how can we act together to find solutions to these shared problems? crates.io is part of those conversations. It's interesting to see how we, as an industry, share very similar problems across ecosystems—Python, npm, Rust, and others.

Xander Cesari: With a smaller, more hobbyist-focused community, you can have relaxed rules about what goes into your package manager. Everyone knows the spirit of what you're trying to do and you can get away without a lot of hard rules and consequences. Is the Rust world going to have to think about much harder rules around package sizes, allowed files, and how you're allowed to distribute things?

Jan David Nose: Funnily enough, we're coming at this from the opposite direction. Compared to other ecosystems, we've always had fairly strict limits. A crate can be at most around ten megabytes in size. There are limits on what kinds of files you can put in there. Ironically, those limits have helped us keep traffic manageable in this period.

Jan David Nose: At the same time, there is a valid argument that these limits may not serve all Rust use cases. There are situations where you might want to include something precompiled in your crate because it is hard to compile locally, takes a very long time, or depends on obscure headers no one has. I don't think we've reached the final state of what the crates.io package format should look like.

Jan David Nose: That has interesting security implications. When we talk about precompiled binaries or payloads, we all have that little voice in our head every time we see a curl | sh command: can I trust this? The same is true if you download a crate that contains a precompiled blob you cannot easily inspect.

Jan David Nose: The Rust Foundation is doing a lot of work and research here. My colleague Adam, who works on the crates.io team, is working behind the scenes to answer some of these questions. For example: what kind of security testing can we do before we publish crates to make sure they are secure and don't contain malicious payloads? How do we surface this information? How do we tell a publisher that they included files that are not allowed? And from the user's perspective, when you visit crates.io, how can you judge how well maintained and how secure a crate is?

Jan David Nose: Those conversations are happening quite broadly in the ecosystem. On the Infra side we're far down the chain. Ultimately we integrate with whatever security scanning infrastructure crates.io builds. We don't have to do the security research ourselves, but we do have to support it.

Jan David Nose: There's still a lot that needs to happen. As awesome as Rust already is, and as much as I love using it, it's important to remember that we're still a very young ecosystem. Python is now very mature and stable, but it's more than 25 years old. Rust is about ten years old as a stable language. We still have a lot to learn and figure out.

Xander Cesari: Is the Rust ecosystem running into problems earlier than other languages because we're succeeding at being foundational software and Rust is used in places that are even more security-critical than other languages, so you have to hit these hard problems earlier than the Python world did?

Jan David Nose: I think that's true. Other ecosystems probably had more time to mature and answer these questions. We're operating on a more condensed timeline. There is also simply more happening now. Open source has been very successful; it's everywhere. That means there are more places where security is critical.

Jan David Nose: So this comes with the success of open source, with what is happening in the ecosystem at large, and with the industry we're in. It does mean we have less time to figure some things out. On the flip side, we also have less baggage. We have less technical debt and fifteen fewer years of accumulated history. That lets us be on the forefront in some areas, like how a package ecosystem can stay secure and what infrastructure a 21st century open source project needs.

Jan David Nose: Here I really want to call out the Rust Foundation. They actively support this work: hiring people like Marco and me to work full-time on infrastructure, having Walter and Adam focus heavily on security, and as an organization taking supply chain considerations very seriously. The Foundation also works with other ecosystems so we can learn and grow together and build a better industry.

Jan David Nose: Behind the scenes, colleagues constantly work to open doors for us as a relatively young language, so we can be part of those conversations and sit at the table with other ecosystems. That lets us learn from what others have already gone through and also help shape where things are going. Sustainability is a big part of that: how do we fund the project long term? How do we make sure we have the human resources and financial resources to run the infrastructure and support maintainers? I definitely underestimated how much of my job would be relationship management and budget planning, making sure credits last until new ones arrive.

Xander Cesari: Most open core business models give away the thing that doesn't cost much—the software—and charge for the thing that scales with use—the service. In Rust's case, it's all free, which is excellent for adoption, but it must require a very creative perspective on the business side.

Jan David Nose: Yeah, and that's where different forces pull in opposite directions. As an open source project, we want everyone to be able to use Rust for free. We want great user experience. When we talk about downloads, there are ways for us to make them much cheaper, but that might mean hosting everything in a single geographic location. Then everyone, including people in Australia, would have to download from, say, Europe, and their experience would get much worse.

Jan David Nose: Instead, we want to use services that are more expensive but provide a better experience for Rust users. There's a real tension there. On one side we want to do the best we can; on the other side we need to be realistic that this costs money.

Xander Cesari: I had been thinking of infrastructure as a binary: it either works or it doesn't. But you're right, it's a slider. You can pick how much money you want to spend and what quality of service you get. Are there new technologies coming, either for the Rust Infra Team or the packaging world in general, to help with these security problems? New sandboxing technologies or higher-level support?

Jan David Nose: A lot of people are working on this problem from different angles. Internally we've talked a lot about it, especially in the context of Crater. Crater pulls in all of those crates to build them and get feedback from the Rust compiler. That means if someone publishes malicious code, we will download it and build it.

Jan David Nose: In Rust this is a particular challenge because build scripts can essentially do anything on your machine. For us that means we need strong sandboxing. We've built our own sandboxing framework so every crate build runs in an isolated container, which prevents malicious code from escaping and messing with the host systems.
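
To illustrate why build scripts force this design, here is a deliberately harmless, hypothetical build.rs. Cargo compiles and runs it on the host during cargo build, with the full privileges of the invoking user, which is exactly why Crater wraps every build in an isolated container:

    // build.rs -- ordinary Rust that Cargo compiles and executes at build time.
    // This one only reads an environment variable Cargo sets for build scripts,
    // but nothing in the language stops a malicious script from spawning
    // processes, reading files, or talking to the network.
    fn main() {
        let host = std::env::var("HOST").unwrap_or_default();
        println!("cargo:warning=build script ran on host triple: {host}");
    }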

Jan David Nose: We feel that pain in Crater, but if we can solve it in a way that isn't exclusive to Crater—if it also protects user machines from the same vulnerabilities—that would be ideal. People like Walter on the Foundation side are actively working on that. I'm sure there are conversations in the Cargo and crates teams as well, because every team that deals with packages sees a different angle of the problem. We all have to come together to solve it, and there is a lot of interesting work happening in that area.

Xander Cesari: I hope help is coming.

Jan David Nose: I'm optimistic.

Xander Cesari: We have this exponential curve with traffic and everything else. It seems like at some point it has to taper off.

Jan David Nose: We'll see. Rust is a young language. I don't know when that growth will slow down. I think there's a good argument that it will continue for quite a while as adoption grows.

Jan David Nose: Being at a conference like RustConf, it's exciting to see how the mix of companies has changed over time. We had a talk from Rivian on how they use Rust in their cars. We've heard from other car manufacturers exploring it. Rust is getting into more and more applications that a few years ago would have been hard to imagine or where the language simply wasn't mature enough yet.

Jan David Nose: As that continues, I think we'll see new waves of growth that sustain the exponential curve we currently have, because we're moving into domains that are new for us. It's amazing to see who is talking about Rust and how they're using it, sometimes in areas like space that you wouldn't expect.

Jan David Nose: I'm very optimistic about Rust's future. With this increase in adoption, we'll see a lot of interesting lessons about how to use Rust and a lot of creative ideas from people building with it. With more corporate adoption, I also expect a new wave of investment into the ecosystem: companies paying people to work full-time on different parts of Rust, both in the ecosystem and in the core project. I'm very curious what the next ten years will look like, because I genuinely don't know.

Xander Cesari: The state of Rust right now does feel a bit like the dog that caught the car and now doesn't know what to do with it.

Jan David Nose: Yeah, I think that's a good analogy. Suddenly we're in a situation where we realize we haven't fully thought through every consequence of success. It's fascinating to see how the challenges change every year. We keep running into new growing pains where something that wasn't an issue a year ago suddenly becomes one because growth keeps going up.

Jan David Nose: We're constantly rebuilding parts of our infrastructure to keep up with that growth, and I don't see that stopping soon. As a user, that makes me very excited. With the language and the ecosystem growing at this pace, there are going to be very interesting things coming that I can't predict today.

Jan David Nose: For the project, it also means there are real challenges: financing the infrastructure we need, finding maintainers and contributors, and creating a healthy environment where people can work without burning out. There is a lot of work to be done, but it's an exciting place to be.

Xander Cesari: Well, thank you for all your work keeping those magic Cargo commands I can type into my terminal just working in the background. If there's any call to action from this interview, it's that if you're a company using Rust, maybe think about donating to keep the Infra Team working.

Jan David Nose: We always love new Rust Foundation members. Especially if you're a company, that's one of the best ways to support the work we do. Membership gives us a budget we can use either to fund people who work full-time on the project or to fill gaps in our infrastructure sponsorship where we don't get services for free and have to pay real money.

Jan David Nose: And if you're not a company, we're always looking for people to help out. The Infra Team has a lot of Rust-based bots and other areas where people can contribute relatively easily.

Xander Cesari: Small scoped bots that you can wrap your head around and help out with.

Jan David Nose: Exactly. It is a bit harder on the Infra side because we can't give people access to our cloud infrastructure. There are areas where it's simply not possible to contribute as a volunteer because you can't have access to the production systems. But there is still plenty of other work that can be done.

Jan David Nose: Like every other team in the project, we're a bit short-staffed. So when you're at conferences, come talk to me or Marco. We have work to do.

Xander Cesari: Well, thank you for doing the work that keeps Rust running.

Jan David Nose: I'm happy to.

Xander Cesari: Awesome. Thank you so much.

Firefox NightlyGetting Better Every Day – These Weeks in Firefox: Issue 192

Highlights

  • Collapsed tab group hover preview is going live in Firefox 145!
    • A collapsed Firefox tab group is hovered, showing a dropdown listing three tabs in a group labeled “Firefox stuff!” The results include “Download Firefox for Desktop — from Mozilla,” “Firefox browser features — Firefox” (currently open), and “Firefox - Wikipedia.”
  • Nicolas Chevobbe added a feature that collapses unreferenced CSS variable declarations in the Rules view (#1719461)
    • The Firefox Developer Tools Style Rules view showing a list of CSS rules applied from multiple stylesheets, including activity-stream.css, tokens-brand.css, and tokens-shared.css. Each rule is shown with its selector, and links to the line numbers in their respective stylesheets. Some rules include expandable boxes with messages similar to “Show 45 unused custom CSS properties,” indicating detection of unused variables or properties.
  • Alexandre Poirot [:ochameau] added a setting to enable automatic pretty printing in the Debugger (#1994128)
    • The Firefox Developer Tools Debugger settings menu is expanded. The settings gear icon is selected, displaying options such as “Disable JavaScript,” “Inline Variable Preview,” “Wrap Lines,” “Source Maps,” “Hide Ignored Sources,” “Ignore Known Third-party Scripts,” “Show paused overlay,” and “Automatic pretty printing,” with several options checked, and the last one hovered. A tooltip at the bottom says, “All sources in the debugger will be automatically pretty printed.”
  • Improved performance on pages making heavy use of CSS variables
    • A table comparing performance improvements in selecting the body element across four websites. The table has three columns: “Before (ms),” “After (ms),” and “%.” For hh.ru, the time improved from 3000 ms to 400 ms (−86.67%). For pinterest, 640 ms to 140 ms (−78.13%). For bulma, 820 ms to 250 ms (−69.51%). For youtube, 250 ms to 100 ms (−60%). All percentage improvements are shown in bold. The header row is shaded blue, and the first column cells are shaded green.
  • Jared H added a “copy this profile” button to the app menu (bug 1992199)
    • The Firefox profile management menu with three options: “New profile” with a plus icon, “Copy this profile” with a duplicate icon (hovered), and “Manage profiles.”

Friends of the Firefox team

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug

  • Khalid AlHaddad
  • Kyler Riggs [:kylr]

New contributors (🌟 = first patch)

  • Alex Stout
  • Khalid AlHaddad
  • Jim Gong
  • Mason Abbruzzese
  • PhuongNam
  • Thomas J Faughnan Jr
  • Mingyuan Zhao [:MagentaManifold]

Project Updates

Add-ons / Web Extensions

WebExtensions Framework
  • Fixed an issue that was preventing dynamic import from resolving moz-extension ES modules when called from content scripts attached to sandboxed subframes – Bug 1988419
    • Thanks to Yoshi Cheng-Hao Huang from the SpiderMonkey Team for looking into and fixing this issue affecting dynamic import usage from content scripts
Addon Manager & about:addons
  • As a follow-up to the work to improve the extensions button panel’s empty states, starting from Nightly 146 Firefox Desktop shows a message bar notice in both the extensions button panel and about:addons when Firefox is running in Troubleshoot Mode (also known as Safe Mode) and all add-ons are expected to be disabled, along with a “Learn more” link pointing the user to the SUMO page describing Troubleshoot Mode in more detail – Bug 1992983 / Bug 1994074 / Bug 1727828
    • Firefox Extensions panel showing a message stating, “All extensions have been disabled by Troubleshoot Mode.” Below the message is an illustration of a fox peeking through a cityscape made of puzzle pieces. A message beneath the image says, “You have extensions installed, but not enabled. Select ‘Manage extensions’ to manage them in settings.” A “Manage extensions” link is displayed at the bottom.

DevTools

WebDriver

Lint, Docs and Workflow

  • ESLint
    • We are working on rolling out automatically fixable JSDoc rules across the whole tree. The aim is to reduce the number of disabled rules in roll-outs and make it simpler to enable JSDoc rules in new areas.
      • jsdoc/no-bad-blocks has now been enabled.
        • JSDoc comments are required to have two stars at the start; this rule raises an issue when a comment looks like it should be a JSDoc comment (e.g. it contains an @ symbol) but starts with only one star.
      • jsdoc/multiline-blocks has also been enabled.
        • This is used mainly for layout consistency of multi-line comments, so that the text of the comment neither starts on the first line nor ends on the last line. This also helps with automatically fixing other rules.
  • StyleLint

Migration Improvements

New Tab Page

Performance Tools (aka Firefox Profiler)

  • Marker tooltips now have a ‘filter’ button to quickly filter the marker chart to similar markers:

Profile Management

  • Profiles is rolling out to all non-Windows 10 users in Firefox 144 and is looking healthy so far
  • Niklas refactored the BackupService to support using it to copy profiles (bug 1992203)
  • Jared H added per-profile desktop shortcuts on Windows (bug 1958955), available via a toggle on the about:editprofile page
  • Dave fixed an intermittent test crash in debug builds (bug 1994849) caused by a race between deleting a directory and attempting to open a lock file. nsProfileLock::LockWithFcntl now returns a warning instead of an error in this case.

Search and Navigation

Storybook/Reusable Components/Acorn Design System

  • <moz-message-bar> now supports arbitrary content with slot="message" elements
    • Ideally this is still something short, like a message as opposed to inputs, etc.
    • <moz-message-bar><span slot="message" data-l10n-id="my-message"><a data-l10n-name="link"></a></span></moz-message-bar>
    • Note: if you’re using Lit, @click listeners etc. set on Fluent elements (data-l10n-name) won’t work; you’ll need to attach them to the data-l10n-id element or another parent

Niko MatsakisMove Expressions

This post explores another proposal in the space of ergonomic ref-counting that I am calling move expressions. To my mind, these are an alternative to explicit capture clauses, one that addresses many (but not all) of the goals from that design with improved ergonomics and readability.

TL;DR

The idea itself is simple: within a closure (or future), we add the option to write move($expr). This is a value expression (“rvalue”) that desugars into a temporary value that is moved into the closure. So

|| something(&move($expr))

is roughly equivalent to something like:

{ 
    let tmp = $expr;
    || something(&{tmp})
}

How it would look in practice

Let’s go back to one of our running examples, the “Cloudflare example”, which originated in this excellent blog post by the Dioxus folks. As a reminder, this is how the code looks today – note the let _some_value = ... lines for dealing with captures:

// task:  listen for dns connections
let _some_a = self.some_a.clone();
let _some_b = self.some_b.clone();
let _some_c = self.some_c.clone();
tokio::task::spawn(async move {
    do_something_else_with(_some_a, _some_b, _some_c)
});

Under this proposal it would look something like this:

tokio::task::spawn(async {
    do_something_else_with(
        move(self.some_a.clone()),
        move(self.some_b.clone()),
        move(self.some_c.clone()),
    )
});

There are times when you would want multiple clones. For example, if you want to move something into a FnMut closure that will then give away a copy on each call, it might look like

data_source_iter
    .inspect(|item| {
        inspect_item(item, move(tx.clone()).clone())
        //                      ----------  -------
        //                           |         |
        //                   move a clone      |
        //                   into the closure  |
        //                                     |
        //                             clone the clone
        //                             on each iteration
    })
    .collect();

// some code that uses `tx` later...
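
In today’s Rust, per the desugaring from the TL;DR, this pattern amounts to creating the temporary clone outside the closure, moving it in, and cloning the clone on each call. Here is a self-contained sketch of that equivalence; Rc stands in for a channel sender, and all the names are illustrative:

use std::rc::Rc;

fn inspect_item(item: &i32, tx: Rc<String>) {
    println!("{item} via {tx}");
}

fn main() {
    let tx = Rc::new(String::from("tx"));

    // What `move(tx.clone())` would create: a temporary clone...
    let tmp = tx.clone();
    let _items: Vec<i32> = (0..3)
        // ...moved into the closure, which clones the clone on each call.
        .inspect(move |item| inspect_item(item, tmp.clone()))
        .collect();

    // `tx` remains usable afterwards, as in the original example.
    println!("some code that uses `tx` later... {tx}");
}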

Credit for this idea

This idea is not mine. It’s been floated a number of times. The first time I remember hearing it was at the RustConf Unconf, but I feel like it’s come up before that. Most recently it was proposed by Zachary Harrold on Zulip, who has also created a prototype called soupa. Zachary’s proposal, like earlier proposals I’ve heard, used the super keyword. Later on @simulacrum proposed using move, which to me is a major improvement, and that’s the version I ran with here.

This proposal makes closures more “continuous”

The reason that I love the move variant of this proposal is that it makes closures more “continuous” and exposes their underlying model a bit more clearly. With this design, I would start by explaining closures with move expressions and just teach move closures at the end, as a convenient default:

A Rust closure captures the places you use in the “minimal way that it can” – so || vec.len() will capture a shared reference to the vec, || vec.push(22) will capture a mutable reference, and || drop(vec) will take ownership of the vector.

You can use move expressions to control exactly what is captured: so || move(vec).push(22) will move the vector into the closure. A common pattern when you want to be fully explicit is to list all captures at the top of the closure, like so:

|| {
    let vec = move(input.vec); // take full ownership of vec
    let data = move(&cx.data); // take a reference to data
    let output_tx = move(output_tx); // take ownership of the output channel

    process(&vec, &mut output_tx, data)
}

As a shorthand, you can write move || at the top of the closure, which will change the default so that closures take ownership of every captured variable. You can still mix-and-match with move expressions to get more control. So the previous closure might be written more concisely like so:

move || {
    process(&input.vec, &mut output_tx, move(&cx.data))
    //       ---------       ---------       --------      
    //           |               |               |         
    //           |               |       closure still  
    //           |               |       captures a ref
    //           |               |       `&cx.data`        
    //           |               |                         
    //       because of the `move` keyword on the closure,
    //       these two are captured "by move"
    //       
}
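
For comparison, the fully explicit capture-list style from the quoted example can be approximated in today’s Rust by binding outside the closure and then using move. A minimal, self-contained sketch, with Input, Ctx, and process as hypothetical stand-ins:

struct Input { vec: Vec<u32> }
struct Ctx { data: String }

fn process(vec: &[u32], out: &mut Vec<u32>, data: &str) {
    println!("processing {} items with {data}", vec.len());
    out.extend_from_slice(vec);
}

fn main() {
    let input = Input { vec: vec![1, 2, 3] };
    let cx = Ctx { data: String::from("context") };
    let output_tx: Vec<u32> = Vec::new();

    let mut closure = {
        let vec = input.vec;           // take full ownership of vec
        let data = &cx.data;           // take a reference to data
        let mut output_tx = output_tx; // take ownership of the output channel
        move || process(&vec, &mut output_tx, data)
    };
    closure();
}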

This proposal makes move “fit in” for me

It’s a bit ironic that I like this, because it’s doubling down on part of Rust’s design that I was recently complaining about. In my earlier post on Explicit Capture Clauses I wrote that:

To be honest, I don’t like the choice of move because it’s so operational. I think if I could go back, I would try to refashion our closures around two concepts

  • Attached closures (what we now call ||) would always be tied to the enclosing stack frame. They’d always have a lifetime even if they don’t capture anything.
  • Detached closures (what we now call move ||) would capture by-value, like move today.

I think this would help to build up the intuition of “use detach || if you are going to return the closure from the current stack frame and use || otherwise”.

move expressions are, I think, moving in the opposite direction. Rather than talking about attached and detached, they bring us to a more unified notion of closures, one where you don’t have “ref closures” and “move closures” – you just have closures that sometimes capture moves, and a “move” closure is just a shorthand for using move expressions everywhere. This is in fact how closures work in the compiler under the hood, and I think it’s quite elegant.

Why not suffix?

One question is whether a move expression should be a prefix or a postfix operator. So e.g.

|| something(&$expr.move)

instead of &move($expr).

My feeling is that it’s not a good fit for a postfix operator because it doesn’t just take the final value of the expression and do something with it; it actually impacts when the entire expression is evaluated. Consider this example:

|| process(foo(bar()).move)

When does bar() get called? If you think about it, it has to be closure creation time, but it’s not very “obvious”.

We reached a similar conclusion when we were considering .unsafe operators. I think there is a rule of thumb that things which delineate a “scope” of code ought to be prefix – though I suspect unsafe(expr) might actually be nice, and not just unsafe { expr }.

Edit: I added this section after-the-fact in response to questions.

Conclusion

I’m going to wrap up this post here. To be honest, what this design really has going for it, above anything else, is its simplicity and the way it generalizes Rust’s existing design. I love that. To me, it joins the set of “yep, we should clearly do that” pieces in this puzzle:

  • Add a Share trait (I’ve gone back to preferring the name share 😁)
  • Add move expressions

These both seem like solid steps forward. I am not yet persuaded that they get us all the way to the goal that I articulated in an earlier post:

“low-level enough for a Kernel, usable enough for a GUI”

but they are moving in the right direction.

Tarek ZiadéWebNN is the future of browser AI

For years, running machine learning in the browser meant juggling GPU support, WASM fallbacks, and flags. WebNN changes that by giving the web a standard inference API between JavaScript and hardware. It is the missing piece that turns the browser into a first-class AI client runtime.

Running AI locally is the long game. A decade from now laptops and phones will run much larger models natively, and the best experiences won’t require sending your data off to a cloud service. WebNN is how the web gets there.

What WebNN really is

WebNN is a W3C draft specification that exposes a graph-based neural network API to the web platform. Instead of binding directly to CUDA or Metal, browsers map WebNN calls to whatever native acceleration they have: DirectML on Windows, Core ML on macOS and iOS, NNAPI on Android, or a CPU path via TFLite/XNNPACK. When a CPU path exists, the browser can fall back there. Think of it as canvas for neural networks: you provide the graph, the browser picks the fastest safe backend.

WebNN as a graph converter

WebNN is a graph builder and validator. The browser takes the graph you define in JS, converts it into a static graph aimed at one of the underlying runtimes in the OS (DirectML, Core ML, NNAPI, TFLite/XNNPACK, or ONNX Runtime on newer Windows), and hands it to that native library. The heavy lifting lives there: compilation, scheduling, and kernel selection. WebNN is the portable contract that keeps your app code unchanged while the browser targets the best backend.

In Chromium, WebNN uses DirectML by default on Windows and can use the OS-shipped ONNX Runtime backend on Windows 11 24H2+, falling back to DirectML otherwise.

Why not “just use WebGPU”?

Libraries like ONNX Runtime Web and TF.js already use WebGPU to get more speed, but that means treating a graphics API as an inference runtime: writing shaders, managing bindings, and re-implementing scheduling. WebGPU is great for explicit control; WebNN is the spec we actually want for AI, with portable graphs, browser-managed backend choice, and no shader boilerplate.

Why this matters

  • Performance without flags: WebNN can route to GPU, NPU, or CPU without developers writing backend-specific code. That means near-native throughput for models like Whisper Tiny or Segment Anything, but delivered via a web page.
  • Predictable portability: The standard defines ops once; browsers own the mapping to the best hardware path they have. Apps no longer maintain separate WebGPU and WASM code paths.
  • Battery-aware: Because browsers control the scheduling and backend choice, they can pick energy-efficient accelerators over brute-force GPU usage on laptops or mobile.

The current state (and why it feels real now)

Chromium-based browsers ship WebNN behind a flag, and ONNX Runtime Web can use the WebNN execution provider when present. According to the public implementation status (webmachinelearning.github.io/webnn-status), the 95 ops in the spec are now largely covered across Core ML, Windows ML/DirectML, the WebNN execution provider for ONNX Runtime, and TFLite/XNNPACK (LiteRT), with only a handful still in flight. That’s enough to build real apps: speech commands, lightweight summarization, image segmentation, and style transfer.

The momentum is similar to what we saw with WebGPU two years ago: early adopters can ship progressive enhancements now, and the API will solidify while hardware vendors line up their drivers.

The big shift is that WebNN moves backend selection into the browser while keeping a high-level graph API. It is closer to Core ML or DirectML than to raw GPU programming.

Why I am bullish

The web wins by being portable and low friction. AI has been the missing capability that pushed teams toward native wrappers or cloud-heavy designs. WebNN gives us a standard, permissionless way to run meaningful AI locally in the browser while respecting energy and privacy constraints. It unlocks the boring path to mass adoption: no installs, instant upgrades, and enough abstraction that developers can stay focused on UX rather than driver matrices.

Now is the time to experiment, measure, and ship progressive AI features. The future of AI in browsers looks like WebNN.

The Servo BlogServo Sponsorship Tiers

The Servo project is happy to announce the following new sponsorship tiers to encourage more donations to the project:

  • Platinum: 10,000 USD/month
  • Gold: 5,000 USD/month
  • Silver: 1,000 USD/month
  • Bronze: 100 USD/month

Organizations and individual sponsors donating in these tiers will be acknowledged on the servo.org homepage with their logo or name. Please note that such donations should come with no obligations to the project, i.e., they should be “no strings attached” donations. All the information about these new tiers is available at the Sponsorship page on this website.

Please contact us at join@servo.org if you are interested in sponsoring the project through one of these tiers.

Use of donations is decided transparently via the Technical Steering Committee’s public funding request process, and active proposals are tracked in servo/project#187.

Last but not least, we’re excited to welcome our first bronze sponsor, LambdaTest, which recently started donating to the Servo project. Thank you very much!

Mozilla Localization (L10N)Localizer spotlight: Robb

About You

My profile in Pontoon is robbp, but I go by Robb. I’m based in Romania and have been contributing to Mozilla localization since 2018 — first between 2018 and 2020, and now again after a break. I work mainly on Firefox (desktop and mobile), Thunderbird, AMO, and SUMO. When I’m not volunteering for open-source projects, I work as a professional translator in Romanian, English, and Italian.

Getting Started

Q: How did you first get interested in localization? Do you remember how you got involved in Mozilla localization?

A: I’ve used Thunderbird for many years, and I never changed the welcome screen. I’d always see that invitation to contribute somehow.

Back in 2018, I was using freeware only — including Thunderbird — and I started feeling guilty that I wasn’t giving back. I tried donating, but online payments seemed shady back then, and I thought a small, one-time donation wouldn’t make a difference.

Around the same time, my mother kept asking questions like, “What is this trying to do on my phone? I think they’re asking me something, but it’s in English!” My generation learned English from TV, Cartoon Network, and software, but when the internet reached the older generation, I realized how big of a problem language barriers could be. I wasn’t even aware that there was such a big wave of localizing everything seen on the internet. I was used to having it all in English (operating system, browser, e-mail client, etc.).

After translating for my mom for a year, I thought, why not volunteer to localize, too? Mozilla products were the first choice — Thunderbird was “in my face” all day, all night, telling me to go and localize. I literally just clicked the button on Thunderbird’s welcome page — that’s where it all started.

I had also tried contributing to other open-source projects, but Mozilla’s Pontoon just felt more natural to me. The interface is very close to the CAT tools I am used to.

Your Localization Journey

Q: What do you do professionally? How does that experience influence your Mozilla work and motivate you to contribute to open-source localization?

A: I’ve been a professional translator since 2012. I work in English, Romanian, and Italian — so yes, I type all the time.

In Pontoon, I treat the work as any professional project. I check for quality, consistency, and tone — just like I would for a client.

I was never a writer. I love translating. That’s why I became a translator (professionally). And here… I actually got more feedback here than in my professional translation projects. I think that’s why I stayed for so long, that’s why I came back.

It is a change of scenery when I don’t localize professionally, a long way from the texts I usually deal with. This is where I unwind, where I translate for the joy of translation, where I find my translator freedom.

Q: At what moment did you realize that your work really mattered?

A: When my mom stopped asking me what buttons to click! Now she just uses her phone in Romanian. I can’t help but smile when I see that. It makes me think I’m a tiny little part of that confidence she has now.

Community & Collaboration

Q: Since your return, Romanian coverage has risen from below 70% to above 90%. You translate, review suggestions, and comment on other contributors’ work. What helps you stay consistent and motivated?

A: I set small goals — I like seeing the completion percentage climb. I celebrate every time I hit a milestone, even if it’s just with a cup of coffee.

I didn’t realize it was such a big deal until the localization team pointed it out. It’s hard to see the bigger picture when you work in isolation. But it’s the same motivation that got me started and brought me back — you just need to find what makes you hum.

Q: Do you conduct product testing after you localize the strings or do you test them by being an active user? 

A: I’m an active user of both Firefox and Thunderbird — I use them daily and quite intensely. I also have Firefox Nightly installed in Romanian, and I like to explore it to see what’s changed and where. But I’ll admit, I’m not as thorough as I should be! Our locale manager gives me a heads-up about things to check, which helps me stay on top of updates. I must admit that the testing part is done by the team manager. He actively monitors everything that goes on in Pontoon and checks how strings land in the products and reach end users.

Q: How do you collaborate with other contributors and support new ones?

A: I’m more of an independent worker, but in Pontoon, I wanted to use the work that was already done by the “veterans” and see how I could fit in. We had email conversations over terms, their collaboration, their contributions, personal likes and dislikes etc. I think they actually did me a favor with the email conversations, given I am not active on any channels or social media and email was my only way of talking to them.

This year I started leaving comments in Pontoon — it’s such an easy way to communicate directly on specific strings. Given I was limited to emails until now, I think comments will help me reach out to other members of the team and start collaborating with them, too.

I keep in touch with the Romanian managers by email or Telegram. One of them helps me with technical terms; he helped get the Firefox project to 100% before the deadline. He contacts me with information on how to use options in Pontoon that I didn’t know about, and with ideas on wording (after he tests and reviews strings). Collaboration doesn’t always mean meetings; sometimes it’s quiet cooperation over time.

Mentoring is a big word, but I’m willing for the willing. If someone reaches out, I’ll always try to help.

Q: Have you noticed improvements in Pontoon since 2020? How does it compare to professional tools you use, and what features do you wish it had?

A: It’s fast — and I love that.

There’s no clutter — and that’s a huge plus. Some of the “much-tooted” professional tools are overloaded with features and menus that slow you down instead of helping. Pontoon keeps things simple and focused.

I also appreciate being able to see translations in other languages. I often check the French and Italian versions, just to compare terms.

The comments section is another great feature — it makes collaboration quick and to the point, perfect for discussing terms or string-specific questions. Machine translation has also improved a lot across the board, and Pontoon is keeping pace.

As for things that could be better — I’d love to try the pre-translation feature, but I’ve noticed that some imported strings confirm the wrong suggestion out of several options. That’s when a good translation-memory cleanup becomes necessary. It would be helpful if experienced contributors could trim the TM, removing obsolete or outdated terms so new contributors won’t accidentally use them.

Pontoon sometimes lags when I move too quickly through strings — like when approving matches or applying term changes across projects. And, unlike professional CAT tools, it doesn’t automatically detect repeated strings or propagate translations for identical text. That’s a small but noticeable gap compared to professional tools.

Personal Reflections

Q: Professional translators often don’t engage in open-source projects because their work is paid elsewhere. What could attract more translators — especially women — to contribute?

A: It’s tricky. Translation is a profession, not a hobby, and people need to make a living.

But for me, working on open-source projects is something different — a way to learn new things, use different tools, and have a different mindset. Maybe if more translators saw it as a creative outlet instead of extra work, they’d give it a try.

Involvement in open source is a personal choice. First, one has to hear about it, understand it, and realize that the software they use for free is made by people — then decide they want to be part of that.

I don’t think it’s a women’s thing. Many come and many go. Maybe it’s just the thrill at the beginning. Some try, but maybe translation is not for them…

Q: What does contributing to Mozilla mean to you today?

A: It’s my way of giving back — and of helping people like my mom, who just want to understand new technology without fear or confusion. That thought makes me smile every time I open Firefox or Thunderbird.

Q: Any final words…

A: I look forward to more blogs featuring fellow contributors and learning and being inspired from their personal stories.

The Mozilla BlogRewiring Mozilla: Doing for AI what we did for the web


AI isn’t just another tech trend — it’s at the heart of most apps, tools and technology we use today. It enables remarkable things: new ways to create and collaborate and communicate. But AI is also letting us down, filling the internet with slop, creating huge social and economic risks — and further concentrating power over how tech works in the hands of a few.

This leaves us with a choice: push the trajectory of AI in a direction that’s good for humanity — or just let the slop pour out and the monopolies grow. For Mozilla, the choice is clear. We choose humanity. 

Mozilla has always been focused on making the internet a better place. Which is why pushing AI in a different direction than it’s currently headed is the core focus of our strategy right now. As AI becomes a fundamental component of everything digital — everything people build on the internet — it’s imperative that we step in to shape where it goes. 

This post is the first in a series that will lay out Mozilla’s evolving strategy to do for AI what we did for the web.

What did we do for the web? 

Twenty-five years ago, Microsoft Internet Explorer had 95% browser market share — controlling how most people saw the internet, and who could build what and on what terms. Mozilla was born to change this. Firefox challenged Microsoft’s monopoly control of the web, and dropped Internet Explorer’s market share to 55% in just a few short years. 

The result was a very different internet. For most people, the internet was different because Firefox made it faster and richer — and blocked the annoying pop-up ads that were pervasive at the time. It did even more for developers: Firefox was a rocketship for the growth of open standards and open source, decentralizing who controlled the technology used to build things on the internet. This ushered in the web 2.0 era. 

How did Mozilla do this? By building a non-profit tech company around the values in the Mozilla Manifesto — values like privacy, openness and trust. And by gathering a global community of tens of thousands — a rebel alliance of sorts — to build an alternative to the big tech behemoth of the time. 

What does success look like? 

This is what we intend to do again: grow an alliance of people, communities, companies who envision — and want to build — a different future for AI.

What does ‘different’ look like? There are millions of good answers to this question. If your native tongue isn’t a major internet language like English or Chinese, it might be AI that has nuance in the language you speak. If you are a developer or a startup, it might be having open source AI building blocks that are affordable, flexible and let you truly own what you create. And if you are, well, anyone, it’s probably apps and services that become more useful and delightful as they add AI — and that are genuinely trustworthy and respectful of who we are as humans. The common threads: agency, diversity, choice. 

Our task is to create a future for AI that is built around these values. We’ve started to rewire Mozilla to take on this task — and developed a new strategy focused just as much on AI as it is on the web. At the heart of this strategy is a double bottom line framework — a way to measure our progress against both mission and money: 

Our double bottom line, in the world and in Mozilla:

  • Mission
    • In the world: Empower people with tech that promotes agency and choice – make AI for and about people.
    • In Mozilla: Build AI that puts humanity first. 100% of Mozilla orgs building AI that advances the Mozilla Manifesto.
  • Money
    • In the world: Decentralize the tech industry – and create a tech ecosystem where the ‘people part’ of AI can flourish.
    • In Mozilla: Radically diversify our revenue. 20% yearly growth in non-search revenue. 3+ companies with $25M+ revenue.

Mozilla has always had an implicit double bottom line. The strategy we developed this year makes this double bottom line explicit — and ties it back to making AI more open and trustworthy. Over the next three years, all of the organizations in Mozilla’s portfolio will design their strategies — and measure their success — against this double bottom line. 

What will we build? 

As we’ve rewired Mozilla, we’ve not only laid out a new strategy — we have also brought in new leaders and expanded our portfolio of responsible tech companies. This puts us on a strong footing. The next step is the most important one: building new things — real technology and products and services that start to carve a different path for AI.

While it is still early days, all of the organizations across Mozilla are well underway with this piece of the puzzle. Each is focused on at least one of the three focus areas in our strategy:

  • Open source AI — for developers
    • Focus: grow a decentralized open source AI ecosystem that matches the capabilities of Big AI — and that enables people everywhere to build with AI on their own terms.
    • Early examples: Mozilla.ai’s Choice First Stack, a unified open-source stack that simplifies building and testing modern AI agents. Also, llamafile for local AI.
  • Public interest AI — by and for communities
    • Focus: work with communities everywhere to build technology that reflects their vision of how AI and tech should work, especially where the market won’t build it for them.
    • Early examples: the Mozilla Data Collective, home to Common Voice, which makes it possible to train and tune AI models in 300+ languages, accents and dialects.
  • Trusted AI experiences — for everyone
    • Focus: create trusted AI-driven products that give people new ways to interact with the web — with user choice and openness as guiding principles.
    • Early examples: recent Firefox AI experiments, which will evolve into AI Window in early 2026 — offering an opt-in way to choose models and add AI features in a browser you trust.

The classic versions of Firefox and Thunderbird are still at the heart of what Mozilla does. These remain our biggest areas of investment — and neither of these products will force you to use AI. At the same time, you will see much more from Mozilla on the AI front in the coming years. And you will see us invest in other double bottom line companies trying to point AI in a better direction.

We need to do this — together

These are the stakes: if we can’t push AI in a better direction, the internet — a place where 6 billion of us now spend much of our lives — will get much, much worse. If we want to shape the future of the web and the internet, we also need to shape the future of AI. 

For Mozilla, whether or not to tackle this challenge isn’t a question anymore. We need to do this. The question is: how? The high level strategy that I’ve laid out is our answer. It doesn’t prescribe all the details — but it does give us a direction to point ourselves and our resources. Of course, we know there is still a HUGE amount to figure out as we build things — and we know that we can’t do this alone.

Which means it’s incredibly important to figure out: who can we walk beside? Who are our allies? There is a growing community of people who believe the internet is alive and well — and who are dedicating themselves to bending the future of AI to keep it that way. They may not all use the same words or be building exactly the same thing, but a rebel alliance of sorts is gathering. Mozilla sees itself as part of this alliance. Our plan is to work with as many of you as possible. And to help the alliance grow — and win — just as we did in the web era. 

You can read the full strategy document here. Next up in this series: Building A LAMP Stack for AI. Followed by: A Double Bottom Line for Tech and The Mozilla Manifesto in the Era of AI.

The post Rewiring Mozilla: Doing for AI what we did for the web appeared first on The Mozilla Blog.

Mozilla ThunderbirdThunderbird Pro November 2025 Update

Welcome back to the latest update on our progress with Thunderbird Pro, a set of additional subscription services designed to enhance the email client you know, while providing a powerful open-source alternative to many of the big tech offerings available today. These services include Appointment, an easy-to-use scheduling tool; Send, which offers end-to-end encrypted file sharing; and Thundermail, an email service from the Thunderbird team. If you’d like more information on the broader details of each service and the road to getting here, you can read our past series of updates here. Do you want to receive these and other updates and be the first to know when Thunderbird Pro is available? Be sure to sign up for the waitlist.

With that said, here’s how progress has shaped up on Thunderbird Pro since the last update.

Current Progress

Thundermail

It took a lot of work to get here, but Thundermail accounts are now in production testing. Internal testing with our own team members has begun, ensuring everything is in place for support and onboarding of the Early Bird wave of users. On the visual side, we’ve implemented improved designs for the new Thundermail dashboard, where users can view and edit their settings, including adding custom domains and aliases. 

The new Thunderbird Pro add-on now features support for Thundermail, which will allow future users who sign up through the add-on to automatically add their Thundermail account in Thunderbird. Work to boost infrastructure and security has also continued, and we’ve migrated our data hosting from the Americas to Germany and the EU where possible. We’ve also been improving our email delivery to reduce the chances of Thundermail messages landing in spam folders.

Appointment

The team has been busy with design work, getting Zoom and CalDAV better integrated, and addressing workflow, infrastructure, and bugs. Appointment received a major visual update in the past few months, which is being applied across all of Thunderbird Pro. While some of these updates have already been implemented, there’s still lots of remodelling happening and under discussion – all in preparation for the Early Bird beta release.

Send

One of the main focuses for Send has been migrating it from its own add-on to the new Thunderbird Pro add-on, which will make using it in Thunderbird desktop much smoother. Progress continues on improving file safety through better reporting and prevention of illegal uploads. Our security review is now complete, with an external assessor validating all issues scheduled for fixing; once finalized, this report will be shared publicly with our community. Finally, we’ve refined the Send user experience by optimizing mobile performance, improving upload and download speeds, enhancing the first-time user flow, and much more.

Bringing it all together

Our new Thunderbird Pro website is now live, marking a major milestone in bringing the project to life. The website offers more details about Thunderbird Pro and serves as the first step for users to sign up, sign in and access their accounts. 


Our initial subscription tier, the Early Bird Plan, priced at $9 per month, will include all three services: Thundermail, Send, and Appointment. Email hosting, file storage, and the security behind all of this come at a cost, and Thunderbird Pro will never be funded by selling user data, showing ads, or compromising its independence. This introductory rate directly supports Thunderbird Pro’s early development and growth, positioning it for long-term sustainability. We will also be actively listening to your feedback and reviewing the pricing and plans we offer. Once the rough edges are smoothed out and we’re ready to open the doors to everyone, we plan to introduce additional tiers to better meet the needs of all our users.

What’s next

Thunderbird Pro is now awaiting its initial closed test run, which will include a core group of community contributors. This group will help conduct a broader test and identify critical issues before we gradually open Early Bird access to our waitlist subscribers in waves. While these services will still be considered under active development, with your help this early release will continue to test and refine them for all future users.

Be sure to sign up for our Early Bird waitlist at tb.pro and help us shape the future of Thunderbird Pro. See you soon!

The post Thunderbird Pro November 2025 Update appeared first on The Thunderbird Blog.

Nick FitzgeraldA Function Inliner for Wasmtime and Cranelift

Note: I cross-posted this to the Bytecode Alliance blog.

Function inlining is one of the most important compiler optimizations, not because of its direct effects, but because of the follow-up optimizations it unlocks. It may reveal, for example, that an otherwise-unknown function parameter value is bound to a constant argument, which makes a conditional branch unconditional, which in turn exposes that the function will always return the same value. Inlining is the catalyst of modern compiler optimization.

Wasmtime is a WebAssembly runtime that focuses on safety and fast Wasm execution. But despite that focus on speed, Wasmtime has historically chosen not to perform inlining in its optimizing compiler backend, Cranelift. There were two reasons for this surprising decision: first, Cranelift is a per-function compiler designed such that Wasmtime can compile all of a Wasm module’s functions in parallel. Inlining is inter-procedural and requires synchronization between function compilations; that synchronization reduces parallelism. Second, Wasm modules are generally produced by an optimizing toolchain, like LLVM, that already did all the beneficial inlining. Any calls remaining in the module will not benefit from inlining — perhaps they are on slow paths marked [[unlikely]] or the callee is annotated with #[inline(never)]. But WebAssembly’s component model changes this calculus.

With the component model, developers can compose multiple Wasm modules — each produced by different toolchains — into a single program. Those toolchains only had a local view of the call graph, limited to their own module, and they couldn’t see cross-module or fused adapter function definitions. None of them, therefore, had an opportunity to inline calls to such functions. Only the Wasm runtime’s compiler, which has the final, complete call graph and function definitions in hand, has that opportunity.

Therefore we implemented function inlining in Wasmtime and Cranelift. Its initial implementation landed in Wasmtime version 36; however, it remains off by default and is still baking. You can test it out via the -C inlining=y command-line flag or the wasmtime::Config::compiler_inlining method. The rest of this article describes function inlining in more detail, digs into the guts of our implementation and the rationale for its design choices, and finally looks at some early performance results.

Function Inlining

Function inlining is a compiler optimization where a call to a function f is replaced by a copy of f’s body. This removes function call overheads (spilling caller-save registers, setting up the call frame, etc…) which can be beneficial on its own. But inlining’s main benefits are indirect: it enables subsequent optimization of f’s body in the context of the call site. That context is important — a parameter’s previously unknown value might be bound to a constant argument and exposing that to the optimizer might cascade into a large code clean up.

Consider the following example, where function g calls function f:

fn f(x: u32) -> bool {
    return x < u32::MAX / 2;
}

fn g() -> u32 {
    let a = 42;
    if f(a) {
        return a;
    } else {
        return 0;
    }
}

After inlining the call to f, function g looks something like this:

fn g() -> u32 {
    let a = 42;

    let x = a;
    let f_result = x < u32::MAX / 2;

    if f_result {
        return a;
    } else {
        return 0;
    }
}

Now the whole subexpression that defines f_result only depends on constant values, so the optimizer can replace that subexpression with its known value:

fn g() -> u32 {
    let a = 42;

    let f_result = true;
    if f_result {
        return a;
    } else {
        return 0;
    }
}

This reveals that the if-else conditional will, in fact, unconditionally transfer control to the consequent, and g can be simplified into the following:

fn g() -> u32 {
    let a = 42;
    return a;
}

In isolation, inlining f was a marginal transformation. When considered holistically, however, it unlocked a plethora of subsequent simplifications that ultimately led to g returning a constant value rather than computing anything at run-time.

Implementation

Cranelift’s unit of compilation is a single function, which Wasmtime leverages to compile each function in a Wasm module in parallel, speeding up compile times on multi-core systems. But inlining a function at a particular call site requires that function’s definition, which implies parallelism-hurting synchronization or some other compromise, like additional read-only copies of function bodies. So this was the first goal of our implementation: to preserve as much parallelism as possible.

Additionally, although Cranelift is primarily developed for Wasmtime by Wasmtime’s developers, it is independent from Wasmtime. It is a reusable library and is reused, for example, by the Rust project as an alternative backend for rustc. But a large part of inlining, in practice, is the heuristics for deciding when inlining a call is likely beneficial, and those heuristics can be domain specific. Wasmtime generally wants to leave most calls out-of-line, inlining only cross-module calls, while rustc wants something much more aggressive to boil away its Iterator combinators and the like. So our second implementation goal was to separate how we inline a function call from the decision of whether to inline that call.

These goals led us to a layered design where Cranelift has an optional inlining pass, but the Cranelift embedder (e.g. Wasmtime) must provide a callback to it. The inlining pass invokes the callback for each call site, and the callback returns a command: either “leave the call as-is” or “here is a function body; replace the call with it”. Cranelift is responsible for the inlining transformation and the embedder is responsible for deciding whether to inline a function call and, if so, getting that function’s body (along with whatever synchronization that requires).
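
In Rust terms, the shape of that contract looks roughly like the sketch below. The names are hypothetical rather than Cranelift’s actual API; the point is the division of labor, with Cranelift driving the pass and the embedder’s callback deciding per call site:

/// Hypothetical stand-ins for Cranelift's function IR and identifiers.
struct FuncId(u32);
struct Function;

/// What the embedder's callback tells the inlining pass to do with one
/// particular call site.
enum InlineCommand {
    /// Leave the call as-is.
    KeepCall,
    /// Here is a copy of the callee's body; replace the call with it.
    Inline(Function),
}

/// The embedder (e.g. Wasmtime) implements this; the inlining pass invokes
/// it once per call site.
trait InlineOracle {
    fn inline_call(&self, caller: FuncId, callee: FuncId) -> InlineCommand;
}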

The mechanics of the inlining transformation — wiring arguments to parameters, renaming values, and copying instructions and basic blocks into the caller — are, well, mechanical. Cranelift makes extensive use of arenas for various entities in its IR, and we begin by appending the callee’s arenas to the caller’s arenas, renaming entity references from the callee’s arena indices to their new indices in the caller’s arenas as we do so. Next we copy the callee’s block layout into the caller and replace the original call instruction with a jump to the caller’s inlined version of the callee’s entry block. Cranelift uses block parameters, rather than phi nodes, so the call arguments simply become jump arguments. Finally, we translate each instruction from the callee into the caller. This is done via a pre-order traversal to ensure that we process value definitions before value uses, simplifying instruction operand rewriting. The changes to Wasmtime’s compilation orchestration are more interesting.

The following pseudocode describes Wasmtime’s compilation orchestration before Cranelift gained an inlining pass and also when inlining is disabled:

// Compile each function in parallel.
let objects = parallel map for func in wasm.functions {
    compile(func)
};

// Combine the functions into one region of executable memory, resolving
// relocations by mapping function references to PC-relative offsets.
return link(objects)
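
In real Rust, that baseline might look like the following sketch using rayon’s parallel iterators, with stub types standing in for Wasmtime’s actual data structures:

use rayon::prelude::*;

// Stub types; Wasmtime's real equivalents carry much more information.
struct WasmFunction;
struct Object;
struct Linked;

fn compile(_func: &WasmFunction) -> Object {
    // Per-function compilation; each call is independent of the others.
    Object
}

fn link(_objects: Vec<Object>) -> Linked {
    // Combine objects into one executable image, resolving relocations.
    Linked
}

fn compile_all(functions: &[WasmFunction]) -> Linked {
    // Compile each function in parallel.
    let objects: Vec<Object> = functions.par_iter().map(compile).collect();
    // Combine the functions, resolving relocations to PC-relative offsets.
    link(objects)
}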

The naive way to update that process to use Cranelift’s inlining pass might look something like this:

// Optionally perform some pre-inlining optimizations in parallel.
parallel for func in wasm.functions {
    pre_optimize(func);
}

// Do inlining sequentially.
for func in wasm.functions {
    func.inline(|f| if should_inline(f) {
        Some(wasm.functions[f])
    } else {
        None
    })
}

// And then proceed as before.
let objects = parallel map for func in wasm.functions {
    compile(func)
};
return link(objects)

Inlining is performed sequentially, rather than in parallel, which is a bummer. But if we tried to make that loop parallel by logically running each function’s inlining pass in its own thread, then a callee function we are inlining might or might not have had its transitive function calls inlined already depending on the whims of the scheduler. That leads to non-deterministic output, and our compilation must be deterministic, so it’s a non-starter.1 But whether a function has already had transitive inlining done or not leads to another problem.

With this naive approach, we are either limited to one layer of inlining or else potentially duplicating inlining effort, repeatedly inlining e into f each time we inline f into g, h, and i. This is because f may come before or after g in our wasm.functions list. We would prefer it if f already contained e and was already optimized accordingly, so that every caller of f didn’t have to redo that same work when inlining calls to f.

This suggests we should topologically sort our functions based on their call graph, so that we inline in a bottom-up manner, from leaf functions (those that do not call any others) towards root functions (those that are not called by any others, typically main and other top-level exported functions). Given a topological sort, we know that whenever we are inlining f into g either (a) f has already had its own inlining done or (b) f and g participate in a cycle. Case (a) is ideal: we aren’t repeating any work because it’s already been done. Case (b), when we find cycles, means that f and g are mutually recursive. We cannot fully inline recursive calls in general (just as you cannot fully unroll a loop in general) so we will simply avoid inlining these calls.2 So topological sort avoids repeating work, but our inlining phase is still sequential.

At the heart of our proposed topological sort is a call graph traversal that visits callees before callers. To parallelize inlining, you could imagine that, while traversing the call graph, we track how many still-uninlined callees each caller function has. Then we batch all functions whose associated counts are currently zero (i.e. they aren’t waiting on anything else to be inlined first) into a layer and process them in parallel. Next, we decrement each of their callers’ counts and collect the next layer of ready-to-go functions, continuing until all functions have been processed.

let call_graph = CallGraph::new(wasm.functions);

let counts = { f: call_graph.num_callees_of(f) for f in wasm.functions };

let layer = [ f for f in wasm.functions if counts[f] == 0 ];
while layer is not empty {
    parallel for func in layer {
        func.inline(...);
    }

    let next_layer = [];
    for func in layer {
        for caller in call_graph.callers_of(func) {
            counts[caller] -= 1;
            if counts[caller] == 0 {
                next_layer.push(caller)
            }
        }
    }
    layer = next_layer;
}

This algorithm will leverage available parallelism, and it avoids repeating work via the same dependency-based scheduling that topological sorting did, but it has a flaw. It will not terminate when it encounters recursion cycles in the call graph. If function f calls function g which also calls f, for example, then it will not schedule either of them into a layer because they are both waiting for the other to be processed first. One way we can avoid this problem is by avoiding cycles.

If you partition a graph’s nodes into disjoint sets, where each set contains every node reachable from every other node in that set, you get that graph’s strongly-connected components (SCCs). If a node does not participate in a cycle, then it will be in its own singleton SCC. The members of a cycle, on the other hand, will all be grouped into the same SCC, since those nodes are all reachable from each other.

In the following example, the dotted boxes designate the graph’s SCCs:

Ignoring edges between nodes within the same SCC, and only considering edges across SCCs, gives us the graph’s condensation. The condensation is always acyclic, because the original graph’s cycles are “hidden” within the SCCs.

Here is the condensation of the previous example:

We can adapt our parallel-inlining algorithm to operate on strongly-connected components, and now it will correctly terminate because we’ve removed all cycles. First, we find the call graph’s SCCs and create the reverse (or transpose) condensation, where an edge a→b is flipped to b→a. We do this because we will query this graph to find the callers of a given function f, not the functions that f calls. I am not aware of an existing name for the reverse condensation, so, at Chris Fallin’s brilliant suggestion, I have decided to call it an evaporation. From there, the algorithm largely remains as it was before, although we keep track of counts and layers by SCC rather than by function.

let call_graph = CallGraph::new(wasm.functions);
let components = StronglyConnectedComponents::new(call_graph);
let evaporation = Evaporation::new(components);

let counts = { c: evaporation.num_callees_of(c) for c in components };

let layer = [ c for c in components if counts[c] == 0 ];
while layer is not empty {
    parallel for func in scc in layer {
        func.inline(...);
    }

    let next_layer = [];
    for scc in layer {
        for caller_scc in evaporation.callers_of(scc) {
            counts[caller_scc] -= 1;
            if counts[caller_scc] == 0 {
                next_layer.push(caller_scc);
            }
        }
    }
    layer = next_layer;
}

This is the algorithm we use in Wasmtime, modulo minor tweaks here and there to engineer some data structures and combine some loops. After parallel inlining, the rest of the compiler pipeline continues in parallel for each function, yielding unlinked machine code. Finally, we link all that together and resolve relocations, same as we did previously.
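
If you want to experiment with the graph machinery yourself, strongly-connected components, the condensation, and the “evaporation” are easy to compute with an off-the-shelf crate such as petgraph. The sketch below is only an illustration, not Wasmtime’s implementation:

use petgraph::algo::condensation;
use petgraph::graph::DiGraph;

fn main() {
    // A small call graph: `main` calls `f`, and `f` and `g` call each other.
    let mut calls = DiGraph::<&str, ()>::new();
    let main_fn = calls.add_node("main");
    let f = calls.add_node("f");
    let g = calls.add_node("g");
    calls.add_edge(main_fn, f, ());
    calls.add_edge(f, g, ()); // f calls g...
    calls.add_edge(g, f, ()); // ...and g calls f: a cycle.

    // The condensation groups {f, g} into one SCC; it is always acyclic.
    let mut scc_graph = condensation(calls, true);

    // Reversing every edge yields the "evaporation": edges now point from
    // callee SCCs to caller SCCs, so SCCs with no incoming edges are leaves
    // and can seed the first layer of parallel inlining.
    scc_graph.reverse();

    for scc in scc_graph.node_weights() {
        println!("SCC: {scc:?}");
    }
}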

Heuristics are the only implementation detail left to discuss, but there isn’t much to say that hasn’t already been said. Wasmtime prefers not to inline calls within the same Wasm module, while cross-module calls are a strong hint that we should consider inlining. Beyond that, our heuristics are extremely naive at the moment, and only consider the code sizes of the caller and callee functions. There is a lot of room for improvement here, and we intend to make those improvements on-demand as people start playing with the inliner. For example, there are many things we don’t consider in our heuristics today, but possibly should:

  • Hints from WebAssembly’s compilation-hints proposal
  • The number of edges to a callee function in the call graph
  • Whether any of a call’s arguments are constants
  • Whether the call is inside a loop or a block marked as “cold”
  • Etc…
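
To make the current policy concrete, here is a toy sketch of a size-based decision in the spirit described above. The thresholds are invented for illustration and are not Wasmtime’s actual numbers:

/// A toy inlining heuristic: prefer cross-module calls and keep code
/// growth bounded. Purely illustrative; not Wasmtime's real policy.
fn should_inline(cross_module: bool, caller_size: usize, callee_size: usize) -> bool {
    // Hypothetical budgets, chosen only for the example.
    const MAX_CALLEE_SIZE: usize = 256;
    const MAX_RESULT_SIZE: usize = 4096;

    // Same-module calls were already considered by the toolchain that
    // produced the module, so leave them out-of-line.
    if !cross_module {
        return false;
    }

    // Refuse very large callees, and refuse to grow the caller too much.
    callee_size <= MAX_CALLEE_SIZE && caller_size + callee_size <= MAX_RESULT_SIZE
}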

Some Initial Results

The speed-up you get (or don’t get) from enabling inlining is going to vary from program to program. Here are a couple of synthetic benchmarks.

First, let’s investigate the simplest case possible, a cross-module call of an empty function in a loop:

(component
  ;; Define one module, exporting an empty function `f`.
  (core module $M
    (func (export "f")
      nop
    )
  )

  ;; Define another module, importing `f`, and exporting a function
  ;; that calls `f` in a loop.
  (core module $N
    (import "m" "f" (func $f))
    (func (export "g") (param $counter i32)
      (loop $loop
        ;; When counter is zero, return.
        (if (i32.eq (local.get $counter) (i32.const 0))
          (then (return)))
        ;; Do our cross-module call.
        (call $f)
        ;; Decrement the counter and continue to the next iteration
        ;; of the loop.
        (local.set $counter (i32.sub (local.get $counter)
                                     (i32.const 1)))
        (br $loop))
    )
  )

  ;; Instantiate and link our modules.
  (core instance $m (instantiate $M))
  (core instance $n (instantiate $N (with "m" (instance $m))))

  ;; Lift and export the looping function.
  (func (export "g") (param "n" u32)
    (canon lift (core func $n "g"))
  )
)

We can inspect the machine code that this compiles down to via the wasmtime compile and wasmtime objdump commands. Let’s focus only on the looping function. Without inlining, we see a loop around a call, as we would expect:

00000020 wasm[1]::function[1]:
        ;; Function prologue.
        20: pushq   %rbp
        21: movq    %rsp, %rbp

        ;; Check for stack overflow.
        24: movq    8(%rdi), %r10
        28: movq    0x10(%r10), %r10
        2c: addq    $0x30, %r10
        30: cmpq    %rsp, %r10
        33: ja      0x89

        ;; Allocate this function's stack frame, save callee-save
        ;; registers, and shuffle some registers.
        39: subq    $0x20, %rsp
        3d: movq    %rbx, (%rsp)
        41: movq    %r14, 8(%rsp)
        46: movq    %r15, 0x10(%rsp)
        4b: movq    0x40(%rdi), %rbx
        4f: movq    %rdi, %r15
        52: movq    %rdx, %r14

        ;; Begin loop.
        ;;
        ;; Test our counter for zero and break out if so.
        55: testl   %r14d, %r14d
        58: je      0x72
        ;; Do our cross-module call.
        5e: movq    %r15, %rsi
        61: movq    %rbx, %rdi
        64: callq   0
        ;; Decrement our counter.
        69: subl    $1, %r14d
        ;; Continue to the next iteration of the loop.
        6d: jmp     0x55

        ;; Function epilogue: restore callee-save registers and
        ;; deallocate this function's stack frame.
        72: movq    (%rsp), %rbx
        76: movq    8(%rsp), %r14
        7b: movq    0x10(%rsp), %r15
        80: addq    $0x20, %rsp
        84: movq    %rbp, %rsp
        87: popq    %rbp
        88: retq

        ;; Out-of-line traps.
        89: ud2
            ╰─╼ trap: StackOverflow

When we enable inlining, then M::f gets inlined into N::g. Despite N::g becoming a leaf function, we will still push %rbp and all that in the prologue and pop it in the epilogue, because Wasmtime always enables frame pointers. But because it no longer needs to shuffle values into ABI argument registers or allocate any stack space, it doesn’t need to do any explicit stack checks, and nearly all the rest of the code also goes away. All that is left is a loop decrementing a counter to zero:3

00000020 wasm[1]::function[1]:
        ;; Function prologue.
        20: pushq   %rbp
        21: movq    %rsp, %rbp

        ;; Loop.
        24: testl   %edx, %edx
        26: je      0x34
        2c: subl    $1, %edx
        2f: jmp     0x24

        ;; Function epilogue.
        34: movq    %rbp, %rsp
        37: popq    %rbp
        38: retq

With this simplest of examples, we can just count the difference in number of instructions in each loop body:

  • 12 without inlining (7 in N::g and 5 in M::f: 2 to push the frame pointer, 2 to pop it, and 1 to return)
  • 4 with inlining

But we might as well verify that the inlined version really is faster via some quick-and-dirty benchmarking with hyperfine. This won’t measure only Wasm execution time; it also measures spawning a whole Wasmtime process, loading code from disk, etc… But it will work for our purposes if we crank up the number of iterations:

$ hyperfine \
    "wasmtime run --allow-precompiled -Cinlining=n --invoke 'g(100000000)' no-inline.cwasm" \
    "wasmtime run --allow-precompiled -Cinlining=y --invoke 'g(100000000)' yes-inline.cwasm"

Benchmark 1: wasmtime run --allow-precompiled -Cinlining=n --invoke 'g(100000000)' no-inline.cwasm
  Time (mean ± σ):     138.2 ms ±   9.6 ms    [User: 132.7 ms, System: 6.7 ms]
  Range (min … max):   128.7 ms … 167.7 ms    19 runs

Benchmark 2: wasmtime run --allow-precompiled -Cinlining=y --invoke 'g(100000000)' yes-inline.cwasm
  Time (mean ± σ):      37.5 ms ±   1.1 ms    [User: 33.0 ms, System: 5.8 ms]
  Range (min … max):    35.7 ms …  40.8 ms    77 runs

Summary
  'wasmtime run --allow-precompiled -Cinlining=y --invoke 'g(100000000)' yes-inline.cwasm' ran
    3.69 ± 0.28 times faster than 'wasmtime run --allow-precompiled -Cinlining=n --invoke 'g(100000000)' no-inline.cwasm'

Okay so if we measure Wasm doing almost nothing but empty function calls and then we measure again after removing function call overhead, we get a big speed up — it would be disappointing if we didn’t! But maybe we can benchmark something a tiny bit more realistic.

A program that we commonly reach for when benchmarking is a small wrapper around the pulldown-cmark markdown library that parses the CommonMark specification (which is itself written in markdown) and renders that to HTML. This is Real World™ code operating on Real World™ inputs that matches Real World™ use cases people have for Wasm. That is, good benchmarking is incredibly difficult, but this program is nonetheless a pretty good candidate for inclusion in our corpus. There’s just one hiccup: in order for our inliner to activate normally, we need a program using components and making cross-module calls, and this program doesn’t do that. But we don’t have a good corpus of such benchmarks yet because this kind of component composition is still relatively new, so let’s keep using our pulldown-cmark program but measure our inliner’s effects via a more circuitous route.

Wasmtime has tunables to enable the inlining of intra-module calls4, and rustc and LLVM have tunables for disabling inlining5. Therefore we can roughly estimate the speed-ups our inliner might unlock on a similar, but extensively componentized and cross-module-calling, program by the following steps (sketched concretely after the list):

  • Disabling inlining when compiling the Rust source code to Wasm

  • Compiling the resulting Wasm binary to native code with Wasmtime twice: once with inlining disabled, and once with intra-module call inlining enabled

  • Comparing those two different compilations’ execution speeds
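
A by-hand version of this experiment might look roughly like the following. The -C flags are the ones from this post and its footnotes; the nightly cargo invocation, target name, file names, and output-flag spelling are assumptions for illustration, and Sightglass automates the actual measurement.

$ # 1. Build the Wasm with rustc/LLVM inlining disabled (nightly flags).
$ RUSTFLAGS="-Cllvm-args=--inline-threshold=0 \
      -Cllvm-args=--inlinehint-threshold=0 -Zinline-mir=no" \
      cargo +nightly build --release --target wasm32-wasip1

$ # 2. Compile the resulting Wasm to native code twice with Wasmtime.
$ wasmtime compile -C inlining=n pulldown-cmark.wasm -o without-inlining.cwasm
$ wasmtime compile -C inlining=y \
      -C cranelift-wasmtime-inlining-intra-module=yes \
      pulldown-cmark.wasm -o with-inlining.cwasm

$ # 3. Compare the two compilations' execution speeds.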

Running this experiment with Sightglass, our internal benchmarking infrastructure and tooling, yields the following results:

execution :: instructions-retired :: pulldown-cmark.wasm

  Δ = 7329995.35 ± 2.47 (confidence = 99%)

  with-inlining is 1.26x to 1.26x faster than without-inlining!

  [35729153 35729164.72 35729173] without-inlining
  [28399156 28399169.37 28399179] with-inlining

Conclusion

Wasmtime and Cranelift now have a function inliner! Test it out via the -C inlining=y command-line flag or via the wasmtime::Config::compiler_inlining method. Let us know if you run into any bugs or whether you see any speed-ups when running Wasm components containing multiple core modules.

Thanks to Chris Fallin and Graydon Hoare for reading early drafts of this piece and providing valuable feedback. Any errors that remain are my own.


  1. Deterministic compilation gives a number of benefits: testing is easier, debugging is easier, builds can be byte-for-byte reproducible, it is well-behaved in the face of incremental compilation and fine-grained caching, etc… 

  2. For what it is worth, this still allows collapsing chains of mutually-recursive calls (a calls b calls c calls a) into a single, self-recursive call (abc calls abc). Our actual implementation does not do this in practice, preferring additional parallelism instead, but it could in theory. 

  3. Cranelift cannot currently remove loops without side effects, and generally doesn’t mess with control-flow at all in its mid-end. We’ve had various discussions about how we might best fit control-flow-y optimizations into Cranelift’s mid-end architecture over the years, but it also isn’t something that we’ve seen would be very beneficial for actual, Real World™ Wasm programs, given that (a) LLVM has already done much of this kind of thing when producing the Wasm, and (b) we do some branch-folding when lowering from our mid-level IR to our machine-specific IR. Maybe we will revisit this sometime in the future if it crops up more often after inlining. 

  4. -C cranelift-wasmtime-inlining-intra-module=yes 

  5. -Cllvm-args=--inline-threshold=0, -Cllvm-args=--inlinehint-threshold=0, and -Zinline-mir=no 

Mozilla ThunderbirdThunderbird Adds Native Microsoft Exchange Email Support

If your organization uses Microsoft Exchange-based email, you’ll be happy to hear that Thunderbird’s latest monthly release, version 145, now officially supports native access via the Exchange Web Services (EWS) protocol. With EWS now built directly into Thunderbird, a third-party add-on is no longer required for email functionality. Calendar and address book support for Exchange accounts remains on the roadmap, but email integration is here and ready to use!

What changes for Thunderbird users

Until now, Thunderbird users in Exchange hosted environments often relied on IMAP/POP protocols or third-party extensions. With full native Exchange support for email, Thunderbird now works more seamlessly in Exchange environments, including full folder listings, message synchronization, folder management both locally and on the server, attachment handling, and more. This simplifies life for users who depend on Exchange for email but prefer Thunderbird as their client.

How to get started

For many people switching from Outlook to Thunderbird, the most common setup involves Microsoft-hosted Exchange accounts such as Microsoft 365 or Office 365. Thunderbird now uses Microsoft’s standard sign-in process (OAuth2) and automatically detects your account settings, so you can start using your email right away without any extra setup.

If this applies to you, setup is straightforward:

  1. Create a new account in Thunderbird 145 or newer.
  2. In the new Account Hub, select Exchange (or Exchange Web Services in legacy setup).
  3. Let Thunderbird handle the rest!

Important note: If you see something different, or need more details or advice, please see our support page and wiki page. Also, some authentication configurations are not supported yet, and you may need to wait for a future update that expands compatibility; please refer to the table below for more details.

What functionality is supported now and what’s coming soon

As mentioned earlier, EWS support in version 145 currently enables email functionality only. Calendar and address book integration are in active development and will be added in future releases. The chart below provides an at-a-glance view of what’s supported today.

| Feature area | Supported now | Not yet supported |
| --- | --- | --- |
| Email – account setup & folder access | ✅ Creating accounts via auto-config with EWS, server-side folder manipulation | |
| Email – message operations | ✅ Viewing messages, sending, replying/forwarding, moving/copying/deleting | |
| Email – attachments | ✅ Attachments can be saved and displayed, with detach/delete support | |
| Search & filtering | ✅ Search subject and body, quick filtering | ❌ Filter actions requiring full body content are not yet supported |
| Accounts hosted on Microsoft 365 | ✅ Domains using the standard Microsoft OAuth2 endpoint | ❌ Domains requiring custom OAuth2 application and tenant IDs will be supported in the future |
| Accounts hosted on-premise | ✅ Password-based Basic authentication | ❌ Password-based NTLM authentication and OAuth2 for on-premise servers are on the roadmap |
| Calendar support | | ❌ Not yet implemented – calendar syncing is on the roadmap |
| Address book / contacts support | | ❌ Not yet implemented – address book support is on the roadmap |
| Microsoft Graph support | | ❌ Not yet implemented – Microsoft Graph integration will be added in the future |

Exchange Web Services and Microsoft Graph

While many people and organizations still rely on Exchange Web Services (EWS), Microsoft has begun gradually phasing it out in favor of a newer, more modern interface called Microsoft Graph. Microsoft has stated that EWS will continue to be supported for the foreseeable future, but over time, Microsoft Graph will become the primary way to connect to Microsoft 365 services.

Because EWS remains widely used today, we wanted to ensure full support for it first to ensure compatibility for existing users. At the same time, we’re actively working to add support for Microsoft Graph, so Thunderbird will be ready as Microsoft transitions to its new standard.

Looking ahead

While Exchange email is available now, calendar and address book integration is on the way, bringing Thunderbird closer to being a complete solution for Exchange users. For many people, having reliable email access is the most important step, but if you depend on calendar and contact synchronization, we’re working hard to bring this to Thunderbird in the near future, making Thunderbird a strong alternative to Outlook.

Keep an eye on future releases for additional support and integrations, but in the meantime, enjoy a smoother Exchange email experience within your favorite email client!


If you want to know more about Exchange support in Thunderbird, please refer to the dedicated page on support.mozilla.org. Organization admins can also find out more on the Mozilla wiki page. To follow ongoing and future work in this area, please refer to the relevant meta-bug on Bugzilla.

The post Thunderbird Adds Native Microsoft Exchange Email Support appeared first on The Thunderbird Blog.

The Mozilla BlogFirefox tab groups just got an upgrade, thanks to your feedback

Firefox tab grouping with cursor selecting “Recipes” and a dropdown list; “Paris Trip” group visible

Tab groups have become one of Firefox’s most loved ways to stay organized — over 18 million people have used the feature since it launched earlier this year. Since then, we’ve been listening closely to feedback from the Mozilla Connect community to make this long-awaited feature even more helpful.

We’ve just concluded a round of highly requested tab groups updates that make it easier than ever to stay focused, organized, and productive. Check out what we’ve been up to, and if you haven’t tried tab groups yet, here’s a helpful starting guide. 

Preview tab group contents on hover

Starting in Firefox 145, you can peek inside a group without expanding it. Whether you’re checking a stash of tabs set aside for deep research or quickly scanning a group to find the right meeting notes doc, hover previews give you the context you need — instantly.

Keep the active tab visible in a collapsed group — and drag tabs into it

Since Firefox 142, when you collapse a group, the tab you’re working in remains visible. It’s a small but mighty improvement that reduces interruptions. And, starting in Firefox 143, you can drag a tab directly into a collapsed group without expanding it. It’s a quick, intuitive way to stay organized while reducing on-screen clutter.

Each of these ideas came from your feedback on Mozilla Connect. We’re grateful for your engagement, creativity, and patience as our team works to improve Tab Groups.

What’s next for tab groups

We’ve got a big, healthy stash of great ideas and suggestions to explore, but we’d love to hear more from you on two areas of long-term interest: 

  • Improving the usefulness and ease of use of saved tab groups. We’re curious how you’re using them and how we can make the experience more helpful to you. What benefits do they bring to your workflow compared to bookmarks? 
  • Workspaces. Some of you have requested a way to separate contexts by creating workspaces — sets of tabs and tab groups that are entirely isolated from each other, yet remain available within a single browser window. We are curious about your workspace use cases and where context separation via window management or profiles doesn’t meet your workflow needs. Is collaboration an important feature of the workspaces for you? 

Have ideas and suggestions? Let us know in this Mozilla Connect thread!


The post Firefox tab groups just got an upgrade, thanks to your feedback appeared first on The Mozilla Blog.

Mozilla ThunderbirdVIDEO: An Android Retrospective

If you can believe it, Thunderbird for Android has been out for just over a year! In this episode of our Community Office Hours, Heather and Monica check back in with the mobile team after our chat with them back in January. Sr. Software Engineer Wolf Montwé and our new Manager of Mobile Apps, Jon Bott, look back at what the growing mobile team has been able to accomplish this last year, what we’re still working on, and what’s up ahead.

We’ll be back next month, talking with members of the desktop team all about Exchange support landing in Thunderbird 145!

Thunderbird for Android: One Year Later

The biggest visual change to the app since last year is the new Account Drawer. The mobile team wants to help users easily tell their accounts apart and switch between them. While this is still a work in progress, we’ve started making these changes in Thunderbird 11.0. We know not everyone is excited about UI changes, but we hope most users like these initial changes! 

Another major but hidden change involves updating our very old code, which came from K-9 Mail. Much of the K-9 code goes back to 2009! Having to work with old code explains why some fixes or new features that should be simple turn out to be complex and time-consuming. Changes end up affecting more components than we expect, which can stretch delivery timelines from a week to months.

We are also still working to proactively eliminate tech debt, which will make the code more reliable and secure, plus allow future improvements and feature additions to be done more quickly. Even though the team didn’t eliminate as much tech debt as they planned, they feel the work they’ve done this year will help reduce even more next year.

Over this past year, the team has also realized Thunderbird for Android users have different needs from K-9 Mail users. Thunderbird desktop users want more features from the desktop app, and this is definitely a major goal we have for our future development. The current feature gap won’t always be here!

Recently, the mobile team has started moving to a monthly release cadence, similar to Firefox and the monthly Thunderbird channel. Changing from bi-monthly to monthly reduces the risks of changing huge amounts of code all at once. The team can make more incremental changes, like the account drawer, in a smaller window. Regular, “bite size” changes allow us to have more conversation with the community. The development team also benefits because they can make better timelines and can more accurately predict the amount of work needed to ship future releases.

A Growing Team and Community

Since we released the Android app, the mobile team and contributor community has grown! One of the unexpected benefits of growing the team and community has been improved documentation. Documentation makes things visible for our talented engineers and existing volunteers, and makes it easier for newcomers to join the project!

Our volunteers have made some incredible contributions to the app! Translators have not only bolstered popular languages like German and French, but have enabled previously unsupported languages. In addition to localization, community members have helped develop the app. Shamin-emon has taken on complicated changes, and has been very patient when some of his proposed changes were delayed. Arnt, another community member, debugged and patched an issue with utf-8 strings in IMAP. And Platform34 triaged numerous issues to give developers insights into reported bugs.

Finally, we’re learning how to balance refactoring and improving an Android app while, at the same time, building an iOS app from scratch! Both apps are important, but the team has had to think about what’s most important in each app. Android development is focusing on prioritizing top bugs and splitting the work to fix them into bite size pieces. With iOS, the team can develop in small increments from the start. Fortunately, the growing team and engaged community are making this balancing act easier than it would have been a year ago.

Looking Forward

In the next year, what can Android users look forward to? At the top of the priority list is better architecture leading to a better user experience, along with view and Message List improvements, HTML signatures, and JMAP support. For the iOS app, the team is focused on getting basic functionality in place, such as reading and writing mail, attachments, and work on the JMAP and IMAP protocols.

VIDEO (Also on Peertube):

Listen to the Episode

The post VIDEO: An Android Retrospective appeared first on The Thunderbird Blog.

The Servo BlogOctober in Servo: better for the web, better for embedders, better for you

Servo now supports several new web platform features:

servoshell nightly showing new support for CompressionStream and synthetic bold

servoshell for macOS now ships as native Apple Silicon binaries (@jschwe, #39981). Building servoshell for macOS x86-64 still works for now, but is no longer officially supported by automated testing in CI (see § For developers).

In servoshell for Android, you can now enable experimental mode with just two taps (@jdm, #40054), use the software keyboard (@jdm, #40009), deliver touch events to web content (@mrobinson, #40240), and dismiss the location field (@jdm, #40049). Pinch zoom is now fully supported in both Servo and servoshell, taking into account the locations of pinch inputs (@mrobinson, @atbrakhi, #40083) and allowing keyboard scrolling when zoomed in (@mrobinson, @atbrakhi, #40108).

servoshell on Android. Left: you can now turn on experimental mode in the settings menu. Right: we now support the soft keyboard and touch events.

AbortController and AbortSignal are now enabled by default (@jdm, @TimvdLippe, #40079, #39943), after implementing AbortSignal.timeout() (@Taym95, #40032) and fixing throwIfAborted() on AbortSignal (@Taym95, #40224). If this is the first time you’ve heard of them, you might be surprised how important they are for real-world web compat! Over 40% of Google Chrome page loads at least check if they are supported, and many popular websites including GitHub and Discord are broken without them.

XPath is now enabled by default (@simonwuelker, #40212), after implementing ‘@attr/parent’ queries (@simonwuelker, #39749), Copy > XPath in the DevTools Inspector (@simonwuelker, #39892), completely rewriting the parser (@simonwuelker, #39977), and landing several other fixes (@simonwuelker, #40103, #40105, #40161, #40167, #39751, #39764).

Servo now supports new KeyboardEvent({keyCode}) and ({charCode}) (@atbrakhi, #39590), which is enough to get Speedometer 3.0 and 3.1 working on macOS.

servoshell nightly showing Speedometer 3.1 running successfully on macOS

ImageData can now be sent over postMessage() and structuredClone() (@Gae24, #40084).

Layout engine

Our layout engine can now render text in synthetic bold (@minghuaw, @mrobinson, #39519, #39681, #39633, #39691, #39713), and now selects more appropriate fallback fonts for Kanji in Japanese text (@arayaryoma, #39608).

‘initial-scale’ now does the right thing in <meta name=viewport> (@atbrakhi, @shubhamg13, @mrobinson, #40055).

We’ve improved the way we handle ‘border-radius’ (@Loirooriol, #39571) and margin collapsing (@Loirooriol, #36322). While they’re fairly unassuming fixes on the surface, both of them allowed us to find interop issues in the big incumbent engines (@Loirooriol, #39540, #36321) and help improve web standards (@noamr, @Loirooriol, csswg-drafts#12961, csswg-drafts#12218).

In other words, Servo is good for the web, even if you’re not using it yet!

Embedding and ecosystem

Our HTML-compatible XPath implementation now lives in its own crate, and it’s no longer limited to the Servo DOM (@simonwuelker, #39546). We don’t have any specific plans to release this as a standalone library just yet, but please let us know if you have a use case that would benefit from this!

You can now take screenshots of webviews with WebView::take_screenshot (@mrobinson, @delan, #39583).

Historically Servo has struggled with situations causing 100% CPU usage or unnecessary work on every tick of the event loop, whenever a page is considered “active” or “animating” (#25305, #3406). We had since throttled animations (@mrobinson, #37169) and reflows (@mrobinson, @Loirooriol, #38431), but only to fixed rates of 120 Hz and 60 Hz respectively.

But starting this month, you can run Servo with vsync, thanks to the RefreshDriver trait (@coding-joedow, @mrobinson, #39072), which allows embedders to tell Servo when to start rendering each frame. The default driver continues to run at 120 Hz, but you can define and install your own with ServoBuilder::refresh_driver.

Breaking changes

Servo’s embedding API has had a few breaking changes:

We’ve improved page zoom in our webview API (@atbrakhi, @mrobinson, @shubhamg13, #39738), which includes some breaking changes (a short usage sketch follows the list):

  • WebView::set_zoom was renamed to set_page_zoom, and it now takes an absolute zoom value. This makes it idempotent, but it means if you want relative zoom, you’ll have to multiply the zoom values yourself.
  • Use the new WebView::page_zoom method to get the current zoom value.
  • WebView::reset_zoom was removed; use set_page_zoom(1.0) instead.
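
As a rough illustration of the renamed API (this sketch is not from Servo’s docs: webview is assumed to be an already-constructed WebView, and zoom values are assumed to be plain floating-point factors):

// Hedged sketch of the renamed page-zoom API.
let current = webview.page_zoom();      // read the current absolute zoom
webview.set_page_zoom(current * 1.25);  // relative zoom: multiply it yourself
webview.set_page_zoom(1.0);             // replaces the removed reset_zoom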

Some breaking changes were also needed to give embedders a more powerful way to share input events with webviews (@mrobinson, #39720). Often both your app and the pages in your webviews may be interested in knowing when users press a key. Servo handles these situations by asking the embedder for all potentially useful input events, then echoing some of them back:

  1. Embedder calls WebView::notify_input_event to tell Servo about an input event, then web content (and Servo) can handle the event.
  2. Servo calls WebViewDelegate::notify_keyboard_event to tell the embedder about keyboard events that were neither canceled by scripts nor handled by Servo itself. The event details are included in the arguments.

Embedders had no way of knowing when non-keyboard input events, or keyboard events that were canceled or handled by Servo, had completed all of their effects in Servo. This was good enough for servoshell’s overridable key bindings, but not for WebDriver, where commands like Perform Actions need to reliably wait for input events to be handled. To solve these problems, we’ve replaced notify_keyboard_event with notify_input_event_handled (a self-contained mock of the new flow follows the list):

  1. Embedder calls WebView::notify_input_event to tell Servo about an input event, then web content (and Servo) can handle the event. This now returns an InputEventId, allowing embedders to remember input events that they still care about for step 2.
  2. Servo calls WebViewDelegate::notify_input_event_handled to tell the embedder about every input event, when Servo has finished handling it. The event details are not included in the arguments, but you can use the InputEventId to look up the details in the embedder.
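
To make the new flow concrete, here is a self-contained mock of the two steps above. Only the names InputEventId, notify_input_event, and notify_input_event_handled come from Servo; every other type and signature here is invented for illustration.

use std::collections::HashMap;

// Mock of the two-step input flow; not Servo's real API surface.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct InputEventId(u64);

#[derive(Clone, Debug)]
struct KeyboardEvent(char);

#[derive(Default)]
struct Embedder {
    // Events the embedder still cares about, remembered by id (step 1).
    pending: HashMap<InputEventId, KeyboardEvent>,
    next_id: u64,
}

impl Embedder {
    // Step 1: stand-in for WebView::notify_input_event, which now
    // returns an id that the embedder can remember.
    fn notify_input_event(&mut self, event: KeyboardEvent) -> InputEventId {
        let id = InputEventId(self.next_id);
        self.next_id += 1;
        self.pending.insert(id, event);
        id
    }

    // Step 2: stand-in for WebViewDelegate::notify_input_event_handled.
    // The event details are not in the callback; the id looks them up.
    fn notify_input_event_handled(&mut self, id: InputEventId) {
        if let Some(event) = self.pending.remove(&id) {
            // e.g. run an overridable key binding, or unblock a WebDriver
            // "Perform Actions" command waiting on this event
            println!("Servo finished handling {event:?}");
        }
    }
}

fn main() {
    let mut embedder = Embedder::default();
    let id = embedder.notify_input_event(KeyboardEvent('j'));
    // ...later, once all of the event's effects have completed in Servo:
    embedder.notify_input_event_handled(id);
}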

Perf and stability

Servo now does zero unnecessary layout work when updating canvases and animated images, thanks to a new “UpdatedImageData” layout mode (@mrobinson, @mukilan, #38991).

We’ve fixed crashes when clicking on web content on Android (@mrobinson, #39771), and when running Servo on platforms where JIT is forbidden (@jschwe, @sagudev, #40071, #40130).

For developers

CI builds for pull requests should now take 70% less time, since they now run on self-hosted CI runners (@delan, #39900, #39915). Bencher builds for runtime benchmarking now run on our new dedicated servers, so our Speedometer and Dromaeo data should now be more accurate and less noisy (@delan, #39272).

We’ve now switched all of our macOS builds to run on arm64 (@sagudev, @jschwe, #38460, #39968). This helps back our macOS releases with thorough automated testing on the same architecture as our releases, but we can’t run them on self-hosted CI runners yet, so they may be slower for the time being.

Work is underway to set up faster macOS arm64 runners on our own servers (@delan, ci-runners#64), funded by your donations. Speaking of which!

Donations

Thanks again for your generous support! We are now receiving 5753 USD/month (+1.7% over September) in recurring donations.

This helps us cover the cost of our speedy CI and benchmarking servers, one of our latest Outreachy interns, and funding maintainer work that helps more people contribute to Servo. Keep an eye out for further CI improvements in the coming months, including faster macOS arm64 builds and ten-minute WPT builds.

Servo is also on thanks.dev, and already 28 GitHub users (same as September) that depend on Servo are sponsoring us there. If you use Servo libraries like url, html5ever, selectors, or cssparser, signing up for thanks.dev could be a good way for you (or your employer) to give back to the community.


Use of donations is decided transparently via the Technical Steering Committee’s public funding request process, and active proposals are tracked in servo/project#187. For more details, head to our Sponsorship page.

The Mozilla BlogThe writer behind ‘Diary of a Sad Black Woman’ on making space for feelings online

woman sitting in a library holding a large white chess knight piece.

Here at Mozilla, we are the first to admit the internet isn’t perfect, but we know the internet is pretty darn magical. The internet opens up doors and opportunities, allows for human connection, and lets everyone find where they belong — their corners of the internet. We all have an internet story worth sharing. In My Corner Of The Internet, we talk with people about the online spaces they can’t get enough of, the sites and forums that shaped them, and how they would design their own corner of the web.

We caught up with Jacque Aye, the author behind “Diary of a Sad Black Woman.” She talks about blogging culture, writing fiction for “perpetually sighing adults” and Lily Allen’s new album.

What is an internet deep dive that you can’t wait to jump back into?

Right now, I’m deep diving into Lily Allen’s newest album! Not for the gossip, although there’s plenty of that to dive into, but for the psychology behind it all. I appreciate creatives who share so vulnerably but in nuanced and honest ways. Sharing experiences is what makes us feel human, I think. The way she outlined falling in love, losing herself, struggling with insecurities, and feeling numb was so relatable to me. Now, would I share as many details? Probably not. But I do feel her.

What was the first online community you engaged with?

Blogger. I was definitely a Blogger baby, and I used to share my thoughts and outfits there, the same way I currently share on Substack. I sometimes miss those times and my little oversharing community. Most people didn’t really have personal brands then, so everything felt more authentic, anonymous and free.

What is the one tab you always regret closing?

Substack! I always find the coolest articles, save the tab, then completely forget I meant to read it, ahhhh.

What can you not stop talking about on the internet right now?

I post about my books online to an obsessive and almost alarming degree, ha. I’ve been going on and on about my weird, whimsical, and woeful novels, and people seem to resonate with that. I describe my work as Lemony Snicket meets a Boots Riley movie, but for perpetually sighing adults. I also never, ever shut up about my feelings. You can even read my diary online. For free. On Substack.

If you could create your own corner of the internet, what would it look like?

I feel super lucky to have my own little corner of the internet! In my corner, we love wearing cute outfits, listening to sad girl music, watching Tim Burton movies, and reading about flawed women going through absurd trials.

What articles and/or videos are you waiting to read/watch right now?

I can’t wait to settle in and watch Knights of Guinevere! It looks so, so good, and I adore the creator.

What is your favorite corner of the internet?

This will seem so random, but right now, besides Substack, I’m really loving Threads. People are so vulnerable on there, and so willing to share personal stories and ask for help and advice. I love any space where I can express the full range of my feelings… and also share my books and outfits, ha.

How do you imagine the next version of the internet supporting creators who lead with emotion and care?

I really hope the next version of the internet reverts back to the days of Blogger and Tumblr. Where people could design their spaces how they see fit, integrate music and spew their hearts out without all the judgment.


Jacque Aye is an author and writes “Diary of a Sad Black Woman” on Substack. As a woman who suffers from depression and social anxiety, she’s made it her mission to candidly share her experiences with the hopes of helping others dealing with the same. This extends into her fiction work, where she pens tales about woeful women trying their best, with a surrealist, magical touch. Inspired by authors like Haruki Murakami, Sayaka Murata, and Lemony Snicket, Jacque’s stories are dark, magical, and humorous with a hint… well, a bunch… of absurdity.

The post The writer behind ‘Diary of a Sad Black Woman’ on making space for feelings online appeared first on The Mozilla Blog.

The Mozilla BlogIntroducing AI, the Firefox way: A look at what we’re working on and how you can help shape it

Illustration of Firefox browser showing menu options for Current, AI, and Private windows with glowing effects.

We recently shared how we are approaching AI in Firefox — with user choice and openness as our guiding principles. That’s because we believe AI should be built like the internet —  open, accessible, and driven by choice — so that users and the developers helping to build it can use it as they wish, help shape it and truly benefit from it.

In Firefox, you’ll never be locked into one ecosystem or have AI forced into your browsing experience. You decide when, how or whether to use it at all. You’ve already seen this approach in action through some of our latest features like the AI chatbot in the sidebar for desktop or Shake to Summarize on iOS. 

Now, we’re excited to invite you to help shape the work on our next innovation: an AI Window. It’s a new, intelligent and user-controlled space we’re building in Firefox that lets you chat with an AI assistant and get help while you browse, all on your terms. Completely opt-in, you have full control, and if you try it and find it’s not for you, you can choose to switch it off.

As always, we’re building in the open — and we want to build this with you. Starting today, you can sign up to receive updates on our AI Window and be among the first to try it and give us feedback. 


We’re building a better browser, not an agenda

We see a lot of promise in AI browser features making your online experience smoother, more helpful, and free from the everyday disruptions that break your flow. But browsers made by AI companies ask you to make a hard choice — either use AI all the time or don’t use it at all.

We’re focused on making the best browser, which means recognizing that everyone has different needs. For some, AI is part of everyday life. For others, it’s useful only occasionally. And many are simply curious about what it can offer, but unsure where to start.

Regardless of your choice, with Firefox, you’re in control. 

You can continue using Firefox as you always have for the most customizable experience, or switch from classic to Private Window for the most private browsing experience. And now, with AI Window, you have the option to opt in to our most intelligent and personalized experience yet — providing you with new ways to interact with the web.

Why is investing in AI important for Firefox?

With AI becoming a more widely adopted interface to the web, the principles of transparency, accountability, and respect for user agency are critical to keeping it free, open, and accessible to all. As an independent browser, we are well positioned to uphold these principles.

While others are building AI experiences that keep you locked in a conversational loop, we see a different path — one where AI serves as a trusted companion, enhancing your browsing experience and guiding you outward to the broader web.

We believe standing still while technology moves forward doesn’t benefit the web or humanity. That’s why we see it as our responsibility to shape how AI integrates into the web — in ways that protect and give people more choice, not less.

Help us shape the future of the web 

Our success has always been driven by our community of users and developers, and we’ll continue to rely on you as we explore how AI can serve the web — without ever losing focus on our commitment to build what matters most to our users: a Firefox that remains fast, secure and private. 

Join us by contributing to open-source projects and sharing your ideas on Mozilla Connect.

The post Introducing AI, the Firefox way: A look at what we’re working on and how you can help shape it appeared first on The Mozilla Blog.

Mozilla Privacy BlogBehind the Manifesto: The Survivors of the Open Web

Welcome to the blog series “Behind the Manifesto,” where we unpack core issues that are critical to Mozilla’s mission. The Mozilla Manifesto represents Mozilla’s commitment to advancing an open, global internet. This blog series digs deeper on our vision for the web and the people who use it, and how these goals are advanced in policymaking and technology. 

 

The internet wasn’t always a set of corporate apps and walled gardens. In its early days, it was a place of experimentation — a digital commons where anyone could publish, connect, and build without asking permission. That openness depended on invisible layers of technology that allowed the web to function as a true public space. Layers such as browser engines, open standards, and shared protocols are the scaffolding that made the internet free, creative, and interoperable.

In 2013, there were five major browser engines. Now, only three remain: Apple’s WebKit, Google’s Blink, and Mozilla’s Gecko (which powers Firefox). In a world of giants, Gecko fights not for dominance, but for an internet that is open and accessible to all.

In an era of consolidation, a thriving and competitive browser engine ecosystem is critical. But sadly, browser engines are subject to the same trends toward concentration. As we lose competitors, we lose more than a piece of code. We lose choice, perspectives, and ideas about how the web works.

So, how do we drive competition in browser engines and more widely across the web? How do we promote policies that protect people and encourage meaningful choice? How do we contend with AI as both a disruptor and an impetus for innovation? Can competition interventions protect the open web? What’s the impact of landmark antitrust cases for consumers and the future technology landscape?

These aren’t new questions for Mozilla. They’re the same questions that have shaped our mission for more than 20 years, and the ones we continue to ask today. Our recent Mozilla Meetup in Washington D.C., a panel-style event and happy hour, brought these debates to the forefront.

On October 8th, we convened leading minds in tech policy to explore the future of competition and its role in saving the open web. Before a standing-room-only audience, the panelists discussed browser competition, leading antitrust legislation, landmark cases currently under review, and AI’s impact. Their insights underscored a critical point: the same questions about access, agency and choice that defined parts of the early internet are just as pressing in today’s digital ecosystem, shaping our continued pursuit of an open and diverse web. Below are a few takeaways.

On today’s competition landscape:

Luke Hogg, Director, Technology Policy, Foundation for American Innovation:

“Antitrust is back. One of the emerging lessons of the last year in antitrust cases and competition policy is that with these big questions being answered, the results do tend to be bipartisan. Antitrust is a cross-partisan issue.”

On the United States v. Google LLC search case: 

Kush Amlani, Director, Global Competition & Regulation, Mozilla:

“One of our key concerns was ensuring that search competition didn’t come at the expense of browser competition. And the payments to independent browsers were not banned, and that was obviously granted by the judge…What’s next is really how the remedies are implemented, and how effective they are. And the devil is going to be in the detail, in terms of how useful is this data? How much can third parties benefit from syndicating search results?” 

Alissa Cooper, Executive Director, Knight-Georgetown Institute:

“The search case is set up as being pro-divestiture or anti-divestiture, but it’s really about what is going to work. Divestiture aligns with what was requested. If you leave Chrome under Google, you have to build in surveillance and monitoring in the market to make sure their behavior aligns. If you divest, it becomes independent and can operate on its own without the need for monitoring. In the end, do you think that would be an effective remedy to open the market to reentry? Or do you think there is another option?”

On the impact of AI: 

Amba Kak, Co-Executive Director, AI Now Institute:

“AI has upended the market and changed technology, but it’s also true Big Tech, in many ways, has been training for this very disruption for the last ten years. 

In the early 2010s, key resources — data, compute, talent — were already concentrated within a few players due to regulatory inaction. It’s important to understand that this trajectory of AI aligning with the incentives of Big Tech isn’t an accident, it’s by design.”

On the timing of this fight for the open web:

Alissa Cooper, Executive Director, Knight-Georgetown Institute:

“The difference now [as opposed to previous fights for the web] is that we have a lot of experience. We know what the open world and open web look like. In some ways, this is an advantage. The difference now is the unbelievable amount of corporate power involved. There needs to be a field where new businesses can enter. Without it, we are fighting the last war.”

 

This blog is part of a larger series. Be sure to follow Jenn Taylor Hodges on LinkedIn for further insights into Mozilla’s policy priorities.

 

The post Behind the Manifesto: The Survivors of the Open Web appeared first on Open Policy & Advocacy.

The Mozilla BlogMozilla joins the Digital Public Goods Alliance, championing open source to drive global progress

Today, Mozilla is thrilled to join the Digital Public Goods Alliance (DPGA) as its newest member. The DPGA is a UN-backed initiative that seeks to advance open technologies and ensure that technology is put to use in the public interest and serves everyone, everywhere — like Mozilla’s Common Voice, which has been recognized as a Digital Public Good (DPG). This announcement comes on the heels of a big year of digital policy-making globally, where Mozilla has been at the forefront in advocating for open source AI across Europe, North America and the UK. 

The DPGA is a multi-stakeholder initiative with a mission to accelerate the attainment of the Sustainable Development Goals (SDGs) “by facilitating the discovery, development, use of and investment in digital public goods.” Digital public goods means open-source technology, open data, open and transparent AI models, open standards and open content that adhere to privacy, the do no harm principle, and other best practices. 

This is deeply aligned with Mozilla’s mission. It creates a natural opportunity for collaboration and shared advocacy in the open ecosystem, with allies and like-minded builders from across the globe. As part of the DPGA’s Annual Roadmap for 2025, Mozilla will focus on three work streams: 

  1. Promoting DPGs in the Open Source Ecosystem: Mozilla has long championed open-source, public-interest technology as an alternative to profit-driven development. Through global advocacy, policy engagement, and research, we highlight the societal and economic value of open-source, especially in AI. Through our work in the DPGA, we’ll continue pushing for better enabling conditions and funding opportunities for open source, public interest technology.
  2. DPGs and Digital Commons: Mozilla develops and maintains a range of open source projects through our various entities. These include Common Voice, a digital public good with over 33,000 hours of multilingual voice data, and applications like the Firefox web browser and Thunderbird email client. Mozilla also supports open-source AI through our product work, including work by Mozilla.ai, and through our venture fund, Mozilla Ventures.
  3. Funding Open Source & Public Interest Technology: Grounded by our own open source roots, Mozilla will continue to fund open source technologies that help to untangle thorny sociotechnical issues. We’ve fueled a broad and impactful portfolio of technical projects. Beginning in the Fall of 2025, we will introduce our latest grantmaking program: an incubator that will help community-driven projects find “product-community fit” in order to attain long-term sustainability.

We hope to use our membership to share research, tooling, and perspectives with a like-minded audience and partner with the DPGA’s diverse community of builders and allies. 

“Open source AI and open data aren’t just about tech,” said Mark Surman, president of Mozilla. “They’re about access to technology and progress for people everywhere. As a double bottom line, mission-driven enterprise, Mozilla is proud to be part of the DPGA and excited to work toward our joint mission of advancing open-source, trustworthy technology that puts people first.” 

To learn more about DPGA, visit https://digitalpublicgoods.net

The post Mozilla joins the Digital Public Goods Alliance, championing open source to drive global progress  appeared first on The Mozilla Blog.

Firefox Developer ExperienceFirefox WebDriver Newsletter 145

WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 145 release cycle.

Contributions

Firefox is an open source project, and we are always happy to receive external code contributions to our WebDriver implementation. We want to give special thanks to everyone who filed issues, bugs and submitted patches.

In Firefox 145, a new contributor landed two patches in our codebase. Thanks to Khalid AlHaddad for the following fixes:

WebDriver code is written in JavaScript, Python, and Rust so any web developer can contribute! Read how to setup the work environment and check the list of mentored issues for Marionette, or the list of mentored JavaScript bugs for WebDriver BiDi. Join our chatroom if you need any help to get started!

WebDriver BiDi

Niko MatsakisJust call clone (or alias)

Continuing my series on ergonomic ref-counting, I want to explore another idea, one that I’m calling “just call clone (or alias)”. This proposal specializes the clone and alias methods so that, in a new edition, the compiler will (1) remove redundant or unnecessary calls (with a lint); and (2) automatically capture clones or aliases in move closures where needed.

The goal of this proposal is to simplify the user’s mental model: whenever you see an error like “use of moved value”, the fix is always the same: just call clone (or alias, if applicable). This model is aiming for the balance of “low-level enough for a Kernel, usable enough for a GUI” that I described earlier. It’s also making a statement, which is that the key property we want to preserve is that you can always find where new aliases might be created – but that it’s ok if the fine-grained details around exactly when the alias is created are a bit subtle.

The proposal in a nutshell

Part 1: Closure desugaring that is aware of clones and aliases

Consider this move future:

fn spawn_services(cx: &Context) {
    tokio::task::spawn(async move {
        //                   ---- move future
        manage_io(cx.io_system.alias(), cx.request_name.clone());
        //        --------------------  -----------------------
    });
    ...
}

Because this is a move future, this takes ownership of cx.io_system and cx.request_name. Because cx is a borrowed reference, this will be an error unless those values are Copy (which they presumably are not). Under this proposal, capturing aliases or clones in a move closure/future would result in capturing an alias or clone of the place. So this future would be desugared like so (using explicit capture clause strawman notation):

fn spawn_services(cx: &Context) {
    tokio::task::spawn(
        async move(cx.io_system.alias(), cx.request_name.clone()) {
            //     --------------------  -----------------------
            //     capture alias/clone respectively

            manage_io(cx.io_system.alias(), cx.request_name.clone());
        }
    );
    ...
}

Part 2: Last-use transformation

Now, this result is inefficient – there are now two aliases/clones. So the next part of the proposal is that the compiler would, in newer Rust editions, apply a new transformation called the last-use transformation. This transformation would identify calls to alias or clone that are not needed to satisfy the borrow checker and remove them. This code would therefore become:

fn spawn_services(cx: &Context) {
    tokio::task::spawn(
        async move(cx.io_system.alias(), cx.request_name.clone()) {
            manage_io(cx.io_system, cx.request_name);
            //        ------------  ---------------
            //        converted to moves
        }
    );
    ...
}

The last-use transformation would apply beyond closures. Given an example like this one, which clones id even though id is never used later:

fn send_process_identifier_request(id: String) {
    let request = Request::ProcessIdentifier(id.clone());
    //                                       ----------
    //                                       unnecessary
    send_request(request)
}

the user would get a warning like so1:

warning: unnecessary `clone` call will be converted to a move
 --> src/main.rs:7:40
  |
8 |     let request = Request::ProcessIdentifier(id.clone());
  |                                              ^^^^^^^^^^ unnecessary call to `clone`
  |
  = help: the compiler automatically removes calls to `clone` and `alias` when not
    required to satisfy the borrow checker
help: change `id.clone()` to `id` for greater clarity
  |
8 -     let request = Request::ProcessIdentifier(id.clone());
8 +     let request = Request::ProcessIdentifier(id);
  |

and the code would be transformed so that it simply does a move:

fn send_process_identifier_request(id: String) {
    let request = Request::ProcessIdentifier(id);
    //                                       --
    //                                   transformed
    send_request(request)
}
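
For contrast, here is my reading of a case where the transformation would leave the clone in place, because the borrow checker genuinely needs it (log_identifier is a hypothetical helper that borrows id after the clone):

fn send_process_identifier_request_and_log(id: String) {
    let request = Request::ProcessIdentifier(id.clone());
    //                                       ----------
    //                        kept: `id` is used again below
    send_request(request);
    log_identifier(&id);
}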

Mental model: just call “clone” (or “alias”)

The goal of this proposal is that, when you get an error about a use of moved value, or moving borrowed content, the fix is always the same: you just call clone (or alias). It doesn’t matter whether that error occurs in the regular function body or in a closure or in a future, the compiler will insert the clones/aliases needed to ensure future users of that same place have access to it (and no more than that).

I believe this will be helpful for new users. Early in their Rust journey new users are often sprinkling calls to clone as well as sigils like & in more-or-less at random as they try to develop a firm mental model – this is where the “keep calm and call clone” joke comes from. This approach breaks down around closures and futures today. Under this proposal, it will work, but users will also benefit from warnings indicating unnecessary clones, which I think will help them to understand where clone is really needed.

Experienced users can trust the compiler to get it right

But the real question is how this works for experienced users. I’ve been thinking about this a lot! I think this approach fits pretty squarely in the classic Bjarne Stroustrup definition of a zero-cost abstraction:

“What you don’t use, you don’t pay for. And further: What you do use, you couldn’t hand code any better.”

The first half is clearly satisfied. If you don’t call clone or alias, this proposal has no impact on your life.

The key point is the second half: earlier versions of this proposal were more simplistic, and would sometimes result in redundant or unnecessary clones and aliases. Upon reflection, I decided that this was a non-starter. The only way this proposal works is if experienced users know there is no performance advantage to using the more explicit form. This is precisely what we have with, say, iterators, and I think it works out very well. I believe this proposal hits that mark, but I’d like to hear if there are things I’m overlooking.

The last-use transformation codifies a widespread intuition, that clone is never necessary

I think most users would expect that changing message.clone() to just message is fine, as long as the code keeps compiling. But in fact nothing requires that to be the case. Under this proposal, APIs that make clone significant in unusual ways would be more annoying to use in the new Rust edition and I expect ultimately wind up getting changed so that “significant clones” have another name. I think this is a good thing.
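
To illustrate what a “significant clone” could look like, here is a contrived example of my own (not from any real API) where clone has an observable side effect, and where the last-use transformation would therefore change behavior:

use std::sync::atomic::{AtomicUsize, Ordering};

static CLONES_OBSERVED: AtomicUsize = AtomicUsize::new(0);

// Contrived: every clone bumps a global counter, so `clone` is observable.
struct Session;

impl Clone for Session {
    fn clone(&self) -> Session {
        CLONES_OBSERVED.fetch_add(1, Ordering::Relaxed);
        Session
    }
}

fn main() {
    let s = Session;
    // If this is `s`'s last use, the last-use transformation would turn
    // the clone into a move, and the counter would never be bumped.
    // APIs like this would want a differently-named method instead.
    let _t = s.clone();
    assert_eq!(CLONES_OBSERVED.load(Ordering::Relaxed), 1);
}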

Frequently asked questions

I think I’ve covered the key points. Let me dive into some of the details here with a FAQ.

Can you summarize all of these posts you’ve been writing? It’s a lot to digest!

I get it, I’ve been throwing a lot of things out there. Let me begin by recapping the motivation as I see it:

  • I believe our goal should be to focus first on a design that is “low-level enough for a Kernel, usable enough for a GUI”.
    • The key part here is the word enough. We need to make sure that low-level details are exposed, but only those that truly matter. And we need to make sure that it’s ergonomic to use, but it doesn’t have to be as nice as TypeScript (though that would be great).
  • Rust’s current approach to Clone fails both groups of users;
    • calls to clone are not explicit enough for kernels and low-level software: when you see something.clone(), you don’t know that is creating a new alias or an entirely distinct value, and you don’t have any clue what it will cost at runtime. There’s a reason much of the community recommends writing Arc::clone(&something) instead.
    • calls to clone, particularly in closures, are a major ergonomic pain point, this has been a clear consensus since we first started talking about this issue.

I then proposed a set of three changes to address these issues, authored in individual blog posts:

  • First, we introduce the Alias trait (originally called Handle). The Alias trait introduces a new method alias that is equivalent to clone but indicates that this will be creating a second alias of the same underlying value.
  • Second, we introduce explicit capture clauses, which lighten the syntactic load of capturing a clone or alias, make it possible to declare up-front the full set of values captured by a closure/future, and will support other kinds of handy transformations (e.g., capturing the result of as_ref or to_string).
  • Finally, we introduce the just call clone proposal described in this post. This modifies closure desugaring to recognize clones/aliases and also applies the last-use transformation to replace calls to clone/alias with moves where possible.

What would it feel like if we did all those things?

Let’s look at the impact of each set of changes by walking through the “Cloudflare example”, which originated in this excellent blog post by the Dioxus folks:

let some_value = Arc::new(something);

// task 1
let _some_value = some_value.clone();
tokio::task::spawn(async move {
    do_something_with(_some_value);
});

// task 2:  listen for dns connections
let _some_a = self.some_a.clone();
let _some_b = self.some_b.clone();
let _some_c = self.some_c.clone();
tokio::task::spawn(async move {
  	do_something_else_with(_some_a, _some_b, _some_c)
});

As the original blog post put it:

Working on this codebase was demoralizing. We could think of no better way to architect things - we needed listeners for basically everything that filtered their updates based on the state of the app. You could say “lol get gud,” but the engineers on this team were the sharpest people I’ve ever worked with. Cloudflare is all-in on Rust. They’re willing to throw money at codebases like this. Nuclear fusion won’t be solved with Rust if this is how sharing state works.

Applying the Alias trait and explicit capture clauses makes for a modest improvement. You can now clearly see that the calls to clone are alias calls, and you don’t have the awkward _some_value and _some_a variables. However, the code is still pretty verbose:

let some_value = Arc::new(something);

// task 1
tokio::task::spawn(async move(some_value.alias()) {
    do_something_with(some_value);
});

// task 2:  listen for dns connections
tokio::task::spawn(async move(
    self.some_a.alias(),
    self.some_b.alias(),
    self.some_c.alias(),
) {
  	do_something_else_with(self.some_a, self.some_b, self.some_c)
});

Applying the Just Call Clone proposal removes a lot of boilerplate and, I think, captures the intent of the code very well. It also retains quite a bit of explicitness, in that searching for calls to alias reveals all the places that aliases will be created. However, it does introduce a bit of subtlety, since (e.g.) the call to self.some_a.alias() will actually occur when the future is created and not when it is awaited:

let some_value = Arc::new(something);

// task 1
tokio::task::spawn(async move {
    do_something_with(some_value.alias());
});

// task 2:  listen for dns connections
tokio::task::spawn(async move {
    do_something_else_with(
        self.some_a.alias(),
        self.some_b.alias(),
        self.some_c.alias(),
    )
});

I’m worried that the execution order of calls to alias will be too subtle. How is this “explicit enough for low-level code”?

There is no question that Just Call Clone makes closure/future desugaring more subtle. Looking at task 1:

tokio::task::spawn(async move {
    do_something_with(some_value.alias());
});

this gets desugared to a call to alias when the future is created (not when it is awaited). Using the explicit form:

tokio::task::spawn(async move(some_value.alias()) {
    do_something_with(some_value)
});

I can definitely imagine people getting confused at first – “but that call to alias looks like it’s inside the future (or closure), how come it’s occurring earlier?”

Yet, the code really seems to preserve what is most important: when I search the codebase for calls to alias, I will find that an alias is created for this task. And for the vast majority of real-world examples, the distinction of whether an alias is created when the task is spawned versus when it executes doesn’t matter. Look at this code: the important thing is that do_something_with is called with an alias of some_value, so some_value will stay alive as long as do_something_with is executing. It doesn’t really matter how the “plumbing” worked.

What about futures that conditionally alias a value?

Yeah, good point, those kinds of examples have more room for confusion. Like, look at this:

tokio::task::spawn(async move {
    if false {
        do_something_with(some_value.alias());
    }
});

In this example, there is code that uses some_value with an alias, but only under if false. So what happens? I would assume that indeed the future will capture an alias of some_value, in just the same way that this future will move some_value, even though the relevant code is dead:

tokio::task::spawn(async move {
    if false {
        do_something_with(some_value);
    }
});
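
Under that assumption, the earlier conditional example would behave as if you had written the capture clause explicitly (a sketch using the proposed syntax, not settled semantics):

tokio::task::spawn(async move(some_value.alias()) {
    // the alias is taken when the future is created,
    // even though this branch is dead
    if false {
        do_something_with(some_value);
    }
});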

Can you give more details about the closure desugaring you imagine?

Yep! I am thinking of something like this:

  • If there is an explicit capture clause, use that.
  • Else:
    • For non-move closures/futures, nothing changes:
      • Categorize the usage of each place and pick the “weakest option” that is available:
        • by ref
        • by mut ref
        • by move
    • For move closures/futures, we would change the desugaring:
      • Categorize the usage of each place P and decide how to capture it:
        • by clone/alias, if there is at least one call P.clone() or P.alias() and all other usages of P require only a shared ref (reads)
        • by move, if there are no calls to P.clone() or P.alias(), or if some usage of P requires ownership or a mutable reference
      • In other words, capture by clone/alias when a place a.b.c is only used via shared references, and at least one of those uses is a clone or alias (see the sketch after the examples below).
        • For this purpose, accessing a “prefix place” a or a “suffix place” a.b.c.d is also considered an access to a.b.c.

Examples that show some edge cases:

let f = move || {
    if consume {
        x.foo(); // assuming `foo` takes `self`: `x` is consumed, but only on this branch
    }
};
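
And to make the categorization concrete, here is a small sketch of how each captured place would be classified under the rules above (it uses the proposed alias method, so it will not compile today):

use std::sync::Arc;

let x = Arc::new(vec![1, 2, 3]); // aliased and otherwise only read
let mut log = Vec::new();        // mutated
let s = String::from("owned");   // consumed

let f = move || {
    let x2 = x.alias();            // `x` has an alias call…
    println!("len = {}", x.len()); // …and every other use is a read,
                                   // so `x` is captured by alias
    log.push(1);                   // mutable use: `log` captured by move
    drop(s);                       // ownership use: `s` captured by move
    x2
};
let _ = f();

// Because `x` was captured by alias rather than moved, the original
// binding would still be usable here under the proposal:
println!("still have x: {}", x.len());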

Why not do something similar for non-move closures?

In the relevant cases, non-move closures will already just capture by shared reference. This means that later attempts to use that variable will generally succeed:

let f = async {
    //  ----- NOT async move
    self.some_a.alias()
};

do_something_else(self.some_a.alias());
//                ----------- later use succeeds

f.await;

This future does not need to take ownership of self.some_a to create an alias, so it will just capture a reference to self.some_a. That means that later uses of self.some_a can still compile, no problem. If this had been a move closure, however, the code above would currently not compile.

There is an edge case where you might get an error, which is when you are moving:

let f = async {
    self.some_a.alias()
};

do_something_else(self.some_a);
//                ----------- move!

f.await;

In that case, you can make this an async move block and/or use an explicit capture clause. Here is a sketch of the capture-clause version (proposed syntax, so the details may change):
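
let f = async move(self.some_a.alias()) {
    self.some_a.alias()
};

do_something_else(self.some_a);
//                ----------- move is now fine: the future
//                            holds its own alias

f.await;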

Can you give more details about the last-use transformation you imagine?

Yep! During codegen, we would identify candidate calls to Clone::clone or Alias::alias. After borrow check has executed, we would examine each callsite and use the borrow check information to decide:

  • Will this place be accessed later?
  • Will some reference potentially referencing this place be accessed later?

If the answer to both questions is no, then we will replace the call with a move of the original place.

Here are some examples:

fn borrow(message: Message) -> String {
    let method = message.method.to_string();

    send_message(message.clone());
    //           ---------------
    //           would be transformed to
    //           just `message`

    method
}

fn borrow(message: Message) -> String {
    send_message(message.clone());
    //           ---------------
    //           cannot be transformed
    //           since `message.method` is
    //           referenced later

    message.method.to_string()
}

fn borrow(message: Message) -> String {
    let r = &message;

    send_message(message.clone());
    //           ---------------
    //           cannot be transformed
    //           since `r` may reference
    //           `message` and is used later.

    r.method.to_string()
}

Why are you calling it the last-use transformation and not optimization?

In the past, I’ve talked about the last-use transformation as an optimization – but I’m changing terminology here. This is because an optimization is typically supposed to be unobservable to users except through measurements of execution time (or through UB), and that is clearly not the case here. This is instead a mechanical transformation, performed by the compiler in a deterministic fashion.

Would the transformation “see through” references?

I think yes, but in a limited way. In other words, I would expect

Clone::clone(&foo)

and

let p = &foo;
Clone::clone(p)

to be transformed in the same way (replaced with foo), and the same would apply to more levels of intermediate usage. This would kind of “fall out” from the MIR-based optimization technique I imagine. It doesn’t have to be this way; we could be more particular about the syntax that people wrote, but I think that would be surprising.

On the other hand, you could still fool it, e.g., like so:

fn identity<T>(x: &T) -> &T { x }

identity(&foo).clone()

Would the transformation apply across function boundaries?

The way I imagine it, no. The transformation would be local to a function body. This means that one could write a force_clone function like the one below, which “hides” the clone so that it will never be transformed away (this is an important capability for edition transformations!):

fn pipe<Msg: Clone>(message: Msg) -> Msg {
    log(message.clone()); // <-- keep this one
    force_clone(&message)
}

fn force_clone<Msg: Clone>(message: &Msg) -> Msg {
    // Here, the input is `&Msg`, so the clone is necessary
    // to produce a `Msg`.
    message.clone()
}

Won’t the last-use transformation change behavior by making destructors run earlier?

Potentially, yes! Consider this example, written using explicit capture clause notation and assuming we add an Alias trait:

async fn process_and_stuff(tx: mpsc::Sender<Message>) {
    tokio::spawn({
        async move(tx.alias()) {
            //     ---------- alias here
            process(tx).await
        }
    });

    do_something_unrelated().await;
}

The precise timing when Sender values are dropped can be important: once all senders have been dropped, the Receiver will start returning None when you call recv. Before that, it will block waiting for more messages, since those tx handles could still be used.
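
You can see the same semantics in miniature with the standard library’s synchronous channel (tokio’s async recv behaves analogously, returning None once all senders are gone):

use std::sync::mpsc;

let (tx, rx) = mpsc::channel::<i32>();
let tx2 = tx.clone(); // a second alias of the sender

drop(tx);
tx2.send(42).unwrap();           // still works: `tx2` is alive
assert_eq!(rx.recv(), Ok(42));

drop(tx2);                       // now *all* senders are gone…
assert!(rx.recv().is_err());     // …so `recv` reports disconnection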

So, in process_and_stuff, when will the sender aliases be fully dropped? The answer depends on whether we do the last-use transformation or not:

  • Without the transformation, there are two aliases: the original tx and the one being held by the future. So the receiver will only start returning None when do_something_unrelated has finished and the task has completed.
  • With the transformation, the call to tx.alias() is removed, and so there is only one alias – tx, which is moved into the future, and dropped once the spawned task completes. This could well be earlier than in the previous code, which had to wait until both process_and_stuff and the new task completed.

Most of the time, running destructors earlier is a good thing: it means lower peak memory usage and faster responsiveness. But in extreme cases it could lead to bugs – a typical example is a Mutex<()> where the guard is being used to protect some external resource.
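
A quick sketch of that pattern (hypothetical names; the point is that the guard’s lifetime, not the data inside the mutex, provides the protection):

use std::sync::Mutex;

static PRINTER: Mutex<()> = Mutex::new(());

fn print_report(lines: &[String]) {
    let _guard = PRINTER.lock().unwrap(); // serializes access to stdout
    for line in lines {
        println!("{line}");
    }
    // If `_guard` were dropped earlier than expected, output from
    // concurrent callers could interleave mid-report.
}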

How can we change when code runs? Doesn’t that break stability?

This is what editions are for! We have in fact done a very similar transformation before, in Rust 2021. RFC 2229 changed destructor timing around closures and it was, by and large, a non-event.

The desire for edition compatibility is in fact one of the reasons I want to make this a last-use transformation and not some kind of optimization. There is no UB in any of these examples; it’s just that understanding what Rust code does around clones/aliases is a bit more complex than it used to be, because the compiler will automatically transform those calls. The fact that this transformation is local to a function means we can decide on a call-by-call basis whether it should follow the older edition rules (where it will always occur) or the newer rules (where it may be transformed into a move).

Does that mean that the last-use transformation would change with Polonius or other borrow checker improvements?

In theory, yes, improvements to borrow-checker precision like Polonius could mean that we identify more opportunities to apply the last-use transformation. This is something we can phase in over an edition. It’s a bit of a pain, but I think we can live with it – and I’m unconvinced it will be important in practice. For example, when thinking about the improvements I expect under Polonius, I was not able to come up with a realistic example that would be impacted.

Isn’t it weird to do this after borrow check?

This last-use transformation is guaranteed not to produce code that would fail the borrow check. However, it can affect the correctness of unsafe code:

let p: *const T = &*some_place;

let q: T = some_place.clone();
//         ---------- assuming `some_place` is
//         not used later, becomes a move

unsafe {
    do_something(p);
    //           -
    // This now refers to a stack slot
    // whose value is uninitialized.
}

Note though that, in this case, there would be a lint identifying that the call to some_place.clone() will be transformed to just some_place. We could also detect simple examples like this one and report a stronger deny-by-default lint, as we often do when we see guaranteed UB.

Shouldn’t we use a keyword for this?

When I originally had this idea, I called it “use-use-everywhere” and, instead of writing x.clone() or x.alias(), I imagined writing x.use. This made sense to me because a keyword seemed like a stronger signal that this was impacting closure desugaring. However, I’ve changed my mind for a few reasons.

First, Santiago Pastorino gave strong pushback that x.use was going to be a stumbling block for new learners. They now have to see this keyword and try to understand what it means – in contrast, if they see method calls, they will likely not even notice something strange is going on.

The second reason was TC, who argued in the lang-team meeting that all the arguments for why it should be ergonomic to alias a ref-counted value in a closure applied equally well to clone, depending on the needs of your application. I completely agree. As I mentioned earlier, this also addresses a concern I’ve heard with the Alias trait, which is that there are things you want to ergonomically clone but which don’t correspond to “aliases”. True.

In general I think that clone (and alias) are fundamental enough to how Rust is used that it’s ok to special case them. Perhaps we’ll identify other similar methods in the future, or generalize this mechanism, but for now I think we can focus on these two cases.

What about “deferred ref-counting”?

One point that I’ve raised from time to time is that I would like a solution that gives the compiler more room to optimize ref-counting, avoiding ref-count increments in cases where it is obvious that they are not needed. An example might be a function like this:

fn use_data(rc: Rc<Data>) {
    for datum in rc.iter() {
        println!("{datum:?}");
    }
}

This function requires ownership of an alias to a ref-counted value, but it doesn’t actually do anything except read from it. A caller like this one…

use_data(source.alias())

…doesn’t really need to increment the reference count, since the caller will be holding a reference the entire time. I often write code like this using a &:

fn use_data(rc: &Rc<Data>) {
    for datum in rc.iter() {
        println!("{datum:?}");
    }
}

so that the caller can do use_data(&source) – this then allows the callee to write rc.alias() in the case that it wants to take ownership.
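
As a sketch of that calling pattern (using the proposed alias method plus a hypothetical stash_data helper):

fn stash_data(rc: &Rc<Data>, store: &mut Vec<Rc<Data>>) {
    store.push(rc.alias()); // take ownership only when actually needed
}

let source = Rc::new(data);
use_data(&source);               // read-only: no ref-count increment at all
let mut store = Vec::new();
stash_data(&source, &mut store); // increments exactly once, inside the callee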

I’ve basically decided to punt on addressing this problem. I think folks who are very performance sensitive can use &Arc and the rest of us can sometimes eat an extra ref-count increment; either way, the semantics for users are clear enough and (frankly) good enough.


  1. Surprisingly to me, clippy::pedantic doesn’t have a dedicated lint for unnecessary clones. This particular example does get a lint, but it’s a lint about taking an argument by value and then not consuming it. If you rewrite the example to create id locally, clippy does not complain.

The Mozilla Blog

Firefox expands fingerprint protections: advancing towards a more private web

With Firefox 145, we’re rolling out major privacy upgrades that take on browser fingerprinting — a pervasive and hidden tracking technique that lets websites identify you even when cookies are blocked or you’re in private browsing. These protections build on Mozilla’s long-term goal of building a healthier, transparent and privacy-preserving web ecosystem.

Fingerprinting builds a secret digital ID of you by collecting subtle details of your setup — ranging from your time zone to your operating system settings — that together create a “fingerprint” identifiable across websites and across browser sessions. Having a unique fingerprint means fingerprinters can identify you invisibly and continuously, allowing bad actors to track you without your knowledge or consent. Online fingerprinting can track you for months, even when you use a browser’s private browsing mode.

Protecting people’s privacy has always been core to Firefox. Since 2020, Firefox’s built-in Enhanced Tracking Protection (ETP) has blocked known trackers and other invasive practices, while features like Total Cookie Protection and now expanded fingerprinting defenses demonstrate a broader goal: prioritizing your online freedom through innovative privacy-by-design. Since 2021, Firefox has been incrementally enhancing anti-fingerprinting protections targeting the most common pieces of information collected for suspected fingerprinting uses.

Today, we are excited to announce the completion of the second phase of defenses against fingerprinters that linger across all your browsing but aren’t on the known tracker lists. With these fingerprinting protections, the number of Firefox users trackable by fingerprinters is cut in half.

How we built stronger defenses

Drawing from a global analysis of how real people’s browsers can be fingerprinted, Mozilla has developed new, unique and powerful defenses against real-world fingerprinting techniques. Firefox is the first browser with this level of insight into fingerprinting and the most effective deployed defenses to reduce it. Like Total Cookie Protection, one of our most innovative privacy features, these new defenses are debuting in Private Browsing Mode and ETP Strict mode initially, while we work to enable them by default.

How Firefox protects you

These fingerprinting protections work on multiple layers, building on Firefox’s already robust privacy features. For example, Firefox has long blocked known tracking and fingerprinting scripts as part of its Enhanced Tracking Protection.

Beyond blocking trackers, Firefox also limits the information it makes available to websites — a privacy-by-design approach that preemptively shrinks your fingerprint. Browsers provide a way for websites to ask for information that enables legitimate website features, e.g. your graphics hardware information, which allows sites to optimize games for your computer. But trackers can also ask for that information, for no other reason than to help build a fingerprint of your browser and track you across the web.

Since 2021, Firefox has been incrementally advancing fingerprinting protections, covering the most pervasive fingerprinting techniques. These include things like how your graphics card draws images, which fonts your computer has, and even tiny differences in how it performs math. The first phase plugged the biggest and most-common leaks of fingerprinting information.

Recent Firefox releases have tackled the next-largest leaks of user information exploited by online fingerprinters. This ranges from strengthening the font protections to preventing websites from learning hardware details like the number of cores your processor has, the number of simultaneous touches your touchscreen supports, and the dimensions of your dock or taskbar. The full list of detailed protections is available in our documentation.

Our research shows these improvements cut the percentage of users seen as unique by almost half.

Firefox’s new protections strike a balance between disrupting fingerprinters and maintaining web usability. More aggressive fingerprinting blocking might sound better, but it is guaranteed to break legitimate website features. For instance, calendar, scheduling, and conferencing tools legitimately need your real time zone. Firefox’s approach is to target the leakiest fingerprinting vectors (the tricks and scripts used by trackers) while preserving the functionality many sites need to work normally. The end result is a set of layered defenses that significantly reduce tracking without downgrading your browsing experience. More details are available about the specific behaviors, as well as how to recognize a problem on a site and disable protections for that site alone, so you always stay in control. The goal: strong privacy protections that don’t get in your way.

What’s next for your privacy

If you open a Private Browsing window or use ETP Strict mode, Firefox is already working behind the scenes to make you harder to track. The latest phase of Firefox’s fingerprinting protections marks an important milestone in our mission to deliver smart privacy protections that work automatically, with no further extensions or configuration needed. As we head into the future, Firefox remains committed to fighting for your privacy, so you get to enjoy the web on your terms. Upgrade to the latest Firefox and take back control of your privacy.
